
HOW TO IDENTIFY CREDIBLE SOURCES ON THE WEB

by

Dax R. Norman
National Security Agency
PGIP Class 0001

Unclassified thesis submitted to the Faculty
of the Joint Military Intelligence College
in partial fulfillment of the requirements for the degree of
Master of Science of Strategic Intelligence.

19 December 2001

The views expressed in this paper are those of the author and
do not reflect the official policy or position of the
Department of Defense or the U.S. Government.
ACKNOWLEDGEMENTS

Foremost, I am thankful for the endless patience of my wife and

daughter, who for two years worked and played one man short of a full team,

and often carried the ball when I should have.

I am grateful to Professor Jerry P. Miller, Director of the Competitive

Intelligence Center at Simmons College in Boston, for his patient and

persistent help in constructing the thesis survey.

I would also like to thank LTC (ret) Karl Prinslow, at the time, a

contractor employed by the U.S. Army Foreign Military Studies Office, for his

practical assistance, and encouragement.

Thanks must also go to my Thesis Chairman, Dr. Alex Cummins, and

Thesis Reader, Robyn Winder, for their conscientious support of the Joint

Military Intelligence College Master’s program by volunteering to serve.

CONTENTS

List of Graphics……………………………………………………………………………..…. v

Chapter

Page

1. INTRODUCTION TO OPEN SOURCE EVALUATION…………………………...1

Validity Matters, 2
Credibility Counts More, 3
The Challenge of Credible Sources, 4
Assumptions, 6
A Unique Study, 6
Review of Thesis, 8

2. LITERATURE REVIEW……………………………………………………………...10

Range of Thought, 10
Every Man’s Printing Press, 17
Information Gaps, 18

3. METHODOLOGY……………………………………………………………………….19

Key Issue: OSINF Relevance to Intelligence, 19


Survey Development, 20
Research Question and Survey Structure, 22
Research Question: Credibility Criteria, 24
Research Question: Credible Enough to Use, 26
Key Issue: Official Credibility of Criteria, 29
Key Issue: Analyst’s Objectivity and Well Known Titles, 30
Key Issue: Foreign Language Sources, 31
Key Issue: Classified vs. Unclassified Sources, 32
Ethics, 33

4. FINDINGS…………………………………………………………………………...…..34

Key Issue: OSINF Relevance to Intelligence, 35


Research Question: How to Identify Credible Web Sites, 37
Survey Findings, Credibility Criteria, 47
Survey Findings, Credible Enough for Intelligence Use, 51
Survey Findings, Official Credibility Criteria, 53
Survey Findings, Objectivity and Foreign Language Sources, 55
Survey Findings, Classified vs. Unclassified Sources, 58

5. CONCLUSIONS……………………………………………………………..………….62

Appendices

A. Web Site Evaluation Worksheets……………………………………….…66

B. Survey to Industry and Academia……………………………………..….77

C. Survey to Intelligence Community………………………………………..88

D. Criteria Analysts Currently Use to Judge Credibility………………..101

Bibliography………………………………………………………….………...…………..106

Annex 1. Survey Results (not included in original thesis)………………………109

LIST OF GRAPHICS

Tables

Page

1. Questions 8a to 8r, Recommended Criteria and Relative Values (Mean)…….48

2. Questions 9a-f, Required Level of Source Credibility
for Intelligence Products……………………………………………………..….53

3. Question 5, Part 1, Official Criteria for Unclassified Sources…………………54

4. Question 5, Part 2, Official Criteria for Classified Sources…………………….55

5. Questions 7a, b, c, j, k, l, m, Credibility of Well-Known Titles………………..57

6. Questions 7d, e, f, g, h, i, Credibility of Obscure Titles, and
Foreign Web Sites………………………………………………………………...57

7. Questions 7n to 7s, Credibility of All Classified Sources……………………...59

8. Credibility of Open Sources Compared to Classified Sources…………………60

9. Question 7q, Credibility of IMINT Without Annotations………………………61

10. Benchmark Web Site Evaluation Work Sheet, Spot…………………………..66

11. Benchmark Web Site Evaluation Work Sheet, ITU……………………………69

12. Benchmark Web Site Evaluation Work Sheet, NY Times…………………….71

13. Benchmark Web Site Evaluation Work Sheet, Korea…………………………73

14. Blank Web Site Evaluation Work Sheet……………………………………….76

15. Survey Question 6: Credibility Criteria Analysts Currently Use……………101

Graph

1. Question 7q, Credibility of IMINT Without Annotations………………………61

ABSTRACT

TITLE OF THESIS: How to Identify Credible Sources on the Web.

STUDENT: Dax R. Norman

CLASS NO. PGIP 0001 DATE: 19 December 2001

THESIS COMMITTEE CHAIR: Dr. Alex Cummins

SECOND COMMITTEE MEMBER: Robyn Winder

There is little argument today that open sources and the World-Wide-

Web have a role to play in intelligence, but little has been written about

evaluating the credibility of Web sites and communicating that evaluation to

analysts. Such a capability is needed because of the increased opportunity to

collect open source intelligence from the Web; the ever-increasing cost of

classified collection; and the ever-present demand on analysts to analyze and

report at the edge of their knowledge. With so many intelligence sources

available, including the Web, analysts must be able to identify credible

sources. The alternative is to evaluate every piece of information collected

from every Web site of intelligence interest. Due to the enormous size of the

Web, evaluating data validity is not practical.

That is why the Intelligence Community (IC) needs a generally agreed

upon set of criteria for evaluating Web sites of potential intelligence value.

Credible Web sites can be identified. However, without these criteria, and a

method to share the results, hundreds of analysts will repeatedly find the

same Web sites of dubious credibility as other analysts; they will attempt to
evaluate the sites’ usefulness and credibility by many widely different

standards, and share their results with only a few close coworkers. The

quality of these Web site evaluations will vary widely based on the subject of

the Web site and the subject expertise of the evaluator.

This thesis collected criteria recommended by professional Web

searchers and surveyed industry, academia, and the Intelligence Community

for their opinions of those criteria. From this survey the author developed a

weighted list of credibility criteria and a methodology that both the subject-

matter expert and the subject-matter novice will find useful. With these

criteria and the relative credibility scale, subject-matter experts throughout

the IC can evaluate Web sites within their area of expertise and share that

source evaluation with the entire IC.

This thesis identifies valid criteria for evaluating the credibility of open

source Web sites; presents a relative credibility scale based on benchmarked

Web sites; identifies the target level of credibility for all intelligence sources;

offers a Web site evaluation worksheet; and compares the credibility of open

sources to classified sources. Credible information can be located on the Web,

and although subject-matter experts are the best evaluators, any analyst can

evaluate a Web site when he does not have a subject-matter expert to assist

him.
CHAPTER 1

INTRODUCTION TO OPEN SOURCE EVALUATION

Along with the information technology revolution has come an equally

important increase in information access and information sources via the

World-Wide-Web. However, such abundance is a double-edged sword because

the Web contains every type of print, audio, and visual data from every type

of source, including children, students, professors, conspiracy theorists,

researchers, advertisers, government data, and government misinformation.

Information analysts must sort the useful information from the junk.

However, what is useless for one person may be just right for someone else.

This thesis will establish Intelligence Community criteria for distinguishing

credible Web sites from untrustworthy, or non-credible, Web sites. This thesis

used a survey structured to answer several key issues and the research

question: how to identify credible sources on the Web. The hypothesis was

that credible Web sites can be confidently identified by evaluating the Web

sites based on criteria recommended by professional Web searchers and

agreed to by intelligence analysts. Most analysts today apparently evaluate

the data rather than the source.

VALIDITY MATTERS

This thesis will also show that most analysts do not attempt to identify

credible sources, but evaluate the validity of the data in the sources. There

is a common misunderstanding about validity and credibility. Validity is an

attribute of information. Validity also describes information as

simultaneously relevant and meaningful. Validity can also refer to the proper

use of logic to reach a conclusion.1 In psychometrics, validity can have

several meanings, including the proper use, or function of a measurement

tool.2 This thesis uses validity as an attribute of data that is verifiably correct.

Validity is what the analyst means when he asks, “Is this data correct?”

Although validity is important to intelligence, it always describes the

information rather than the source, and alone does not measure believability,

which this thesis calls credibility. Because discrete elements of information

can be examined and compared, the validity of information is of most

concern to analysts because analysts know how to check validity. They

examine the data for consistency, verify it with other sources, or verify that it

functions as expected. Although consistently valid data can lead to credible

sources, the goal should be to identify sources as credible so that every

document from the source does not have to be validated. Establishing

source credibility should be of greater interest to analysts because they

cannot become expert in every subject on which they may be expected to

1
G. & C. Merriam Co., Webster’s New Collegiate Dictionary (Springfield, MA:
G. & C. Merriam Co., 1975), under “Valid.” Cited hereafter as Webster’s.
2
Jum C. Nunnally, Psychometric Theory (New York: McGraw-Hill Book
Company, 1967), 75.

report, because organizational focus changes, analysts change jobs, and there

just is not enough time to learn it all and still report.

This thesis will provide a tool for the general analyst to evaluate Web

sites as potential intelligence sources. Although Web site evaluations are

best done by subject-matter experts, analysts are often expected to report on

unfamiliar topics, and must discern for themselves if a source is credible.

Experts will also be able to use the recommended criteria and credibility

scale to evaluate Web sites in a consistent manner that other people will

understand, and can repeat.3

CREDIBILITY COUNTS MORE

To judge validity, an analyst must understand the issue, or technology,

or strategy, or politics very well for every data element included in his

reporting. Because every analyst cannot possibly be an expert on every

subject, they rely on sources that they trust to provide valid data. This trust

in a person or group is a measure of credibility. A credible source offers

“reasonable grounds for being believed.”4 This is the meaning intended in

this thesis for credibility.

These credible sources are an essential element of intelligence

analyses because analysts are often expected to report on topics, in which

they are not expert, or that are too complex for any one person to

3
See Appendix A, Web Site Evaluation Worksheet, for the relative credibility
scale, benchmark Web site evaluation worksheets, and a blank evaluation
worksheet.
4
Webster’s, under “Credible.”

understand. Because it is impractical for analysts to validate every data

element from every source, the focus should be on identifying credible

sources. In the area of Open Source Intelligence (OSINT), this is even more

important because of the widespread use of OSINT by the other intelligence

disciplines, and the multitude of unclassified open sources.5 The source must

be judged credible before the data can be judged valid. Of course this can

become a circular argument, but in the end it is more useful to have a

credible source than a valid data element. For example, it would be better to

know where to find a foreign leader’s official travel schedule, than to know

where the leader will travel next. This is true because this credible source

can tell one where the next trip will be, any changes to his next trip, and the

details of subsequent trips. If a source provides valid data consistently, it

will soon be judged a credible source. However, once judged credible, it is

less important that every data element the source provides is validated.

Note that open source information (OSINF) is public or proprietary

information available to anyone for a fee or for free. OSINF becomes open

source intelligence (OSINT) when it is used by the Intelligence Community to

answer an intelligence question.

THE CHALLENGE OF CREDIBLE SOURCES

Regardless of the credibility of a source, or the validity of the data,

analysts are more likely to use the sources most accessible to them. The
5
Joint Chiefs of Staff, Joint Pub 1-02, Department of Defense Dictionary of
Military and Associated Terms, URL:
<http://www.dtic.mil/doctrine/jel/doddict/data/f/02542.html>, accessed 13 February
2000. Cited hereafter as Joint Pub 1-02. This thesis uses intelligence disciplines, such
as OSINT, as defined in Joint Pub 1-02.

Web has the potential to put a worldwide library on the desk of every analyst.

With today’s search engines and Web directories, an analyst can conduct in

seconds a single search of the Web that would take a librarian a career to

complete. What the librarian adds, however, is knowing which sources are

credible, based on personal use of the sources or on recommendations from

other librarians and subject-matter experts.

intelligence analysts, who do not have access to a subject-matter expert on

every reportable issue, should have access to credible information sources on

the Web. How to identify credible sources on the Web is the challenge of this

thesis.

In an ideal world, subject-matter experts in every field would identify

credible sources, and index them for everyone to use. However, even in such

a world there would be disagreement on what is credible. Therefore, the

research question that this thesis will answer is how to identify credible

sources on the Web. The focus is on Web sites because library science and

publishers have already established acceptable standards in the print media

for credibility. Such standards include peer-review in scientific journals,

editorial review in newspapers, independent verification of facts, and the

proper labeling of commentary and advertisements in magazines. In the

absence of such standard practices on the Web, it is up to the reader to

judge. With the help of expert Web searchers from industry, defense, and

intelligence, this thesis establishes a set of common credibility evaluation

criteria, which can be used by subject-matter experts as well as analysts

reporting on an unfamiliar issue. Some subjectivity remains, but the

established criteria give analysts the tools and vocabulary to measure the

credibility of sources and to describe a source’s relative trustworthiness.

ASSUMPTIONS

This thesis makes some assumptions. The first two are that open source

intelligence is less costly than classified intelligence and that, if it can be

trusted, it is therefore the preferred source. The third assumption is that

credibility is relative to its intended use and user. For example, a CNN

broadcast might be sufficiently credible for indications and warning (I&W),

but not sufficiently credible for basic intelligence for which the analyst has

some time to conduct research, or when the product will become the

background for future reporting. Likewise, a second-hand report of the

humanitarian conditions in a country may be credible enough for a person

planning an overseas visit; however, only a first-hand report from an

authoritative, unbiased source may be considered for the subject of an

intelligence report. Therefore, a relative credibility scale is necessary rather

than an absolute determination of credible or non-credible.
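The relative scale described here can be sketched as a simple threshold check. The sketch below is purely illustrative: the product categories and numeric thresholds are invented for this example, not taken from the thesis survey, which derives its own target credibility levels in later chapters.

```python
# Illustrative sketch only: product types and thresholds are hypothetical,
# not the thesis's survey-derived values.

# Required credibility level (0.0-1.0) by intelligence product type.
REQUIRED_CREDIBILITY = {
    "indications and warning": 0.5,  # time-critical, so a lower bar
    "current intelligence": 0.7,
    "basic intelligence": 0.9,       # becomes background for later work
}

def credible_enough(source_score: float, product: str) -> bool:
    """Return True if a source's credibility score meets the bar
    for the given product type."""
    return source_score >= REQUIRED_CREDIBILITY[product]

# A CNN-like broadcast scoring 0.6 clears the I&W bar but not the
# basic-intelligence bar, mirroring the example above.
print(credible_enough(0.6, "indications and warning"))  # True
print(credible_enough(0.6, "basic intelligence"))       # False
```

The point of the threshold table is that a single source can be simultaneously acceptable and unacceptable, depending on the product it feeds.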

A UNIQUE STUDY

Although other studies establish criteria for evaluating Web sites, such

as Alison Cooke’s Authoritative Guide to Evaluating Information on the

Internet, I have not found a study that focuses on establishing the credibility

of Web sites.6 Cooke’s work is an excellent guide to evaluating the overall

quality of many types of Web sites. The closest Joint Military Intelligence

College study found is MAJ Robert M. Simmons’s unclassified thesis, Open

Source Intelligence: An Examination of Its Exploitation, 1995.7 Simmons

focuses on the accessibility and use of open source, not the credibility of

sources. Although Reva Basch’s Secrets of the Super Net Searchers includes

the question of credibility, it is less formal than this study and asks the

credibility question differently of each expert interviewed.8 Secrets of the

Super Net Searchers does not focus on any one issue, but asks many

questions of the industry experts. However, many criteria from Basch’s book

were included in the thesis survey used for this study. This thesis surveyed

analysts from defense, intelligence, and academia, as well as industry, to

establish common criteria for evaluating the credibility of Web sites.9 The

broad survey population, which included industry, academia, and

intelligence, and the focus on credibility, make this study unique.

REVIEW OF THESIS

6
Alison Cooke, Authoritative Guide to Evaluating Information on the Internet
(New York: Neal-Schuman Publishers, Inc., 1999).
7
Major Robert M. Simmons, USA, Open Source Intelligence: An Examination
of Its Exploitation in the Defense Intelligence Community, MSSI Thesis (Washington,
DC: Joint Military Intelligence College, August 1995.)
8
Reva Basch, Secrets of the Super Net Searchers (Wilton, CT : Pemberton
Press, 1996).

9
E-mail Survey, “Joint Military Intelligence College Thesis Survey: Credibility
Criteria for Web Sites,” conducted by the author, July-August 2001. Hereafter cited
as Survey.

The research for this thesis began with a literature review, found in

Chapter two. From the literature several authors were selected who either

represent a significant point of view or are in a position to influence other

analysts. The objective of the literature review was to identify what is

already known, or thought about identifying credible sources on the Web.

However, the literature also revealed tangent issues that influence how or

when unclassified open sources are used in intelligence products. Most

significantly, the literature review identified the criteria recommended by

expert Web searchers for judging the credibility of Web sites. Those criteria

were included in the thesis survey, which was the primary research tool used

by the author.

Chapter three describes the research methodology employed. That

methodology included gathering expert criteria from the literature review;

developing and administering the survey to industry, academic, and

intelligence analysts; coding the survey results and entering the data into the

SPSS statistical program; and performing the calculations that answered

the research questions and the key issues. The recommended credibility

criteria were determined by identifying the criteria that analysts most often

rated as contributing 50 percent or more to the credibility of a Web site; then

determining the relative weights for each criterion and a relative credibility

scale. Finally, four Web sites of known credibility were evaluated as

benchmark sites. Chapter three describes this process in detail as well as

how the target source-credibility level was determined for most intelligence

products.
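The weighting procedure summarized above can be sketched in code. This is a hypothetical reconstruction under stated assumptions: the criterion names and survey means below are invented, and the thesis’s actual weights come from the survey data presented in Chapter four.

```python
# Hypothetical sketch of the weighting procedure; criterion names and
# mean ratings are illustrative, not the thesis data.

# Mean survey ratings (percent contribution to credibility), as if coded
# from questions like 8a-8r.
mean_ratings = {
    "author identified": 70.0,
    "sources cited": 65.0,
    "recently updated": 55.0,
    "attractive layout": 30.0,  # below the 50-percent cut, dropped
}

# Keep criteria that analysts rated as contributing 50 percent or more.
kept = {c: r for c, r in mean_ratings.items() if r >= 50.0}

# Normalize the surviving means into relative weights that sum to 1.
total = sum(kept.values())
weights = {c: r / total for c, r in kept.items()}

def score(site_marks: dict) -> float:
    """Weighted credibility score in [0, 1] for a Web site, given a dict of
    criterion -> 0/1 marks from an evaluation worksheet."""
    return sum(weights[c] * site_marks.get(c, 0) for c in weights)

# A benchmark site meeting every kept criterion scores 1.0 on the scale.
benchmark = score({c: 1 for c in weights})
```

Benchmark sites of known credibility, scored the same way, anchor the relative scale so that other evaluations can be compared against them.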

The results of the survey calculations are shown in the findings

Chapter four. The findings chapter, like the methodology chapter, is

organized to answer the research question and each key issue, which in short

include the following key issues: open source relevance to intelligence,

knowledge of existing official criteria, analysts’ objectivity, credibility of

foreign Web sites in English, credibility of classified versus unclassified

sources; and the research questions of evaluation criteria and the needed

level of credibility.

The conclusions are in Chapter five, and include analysis of the survey

results. The thesis concludes that credible Web sites can be identified,

evaluated, and shared with other analysts. Known weaknesses in the survey

are mentioned in the findings and conclusions chapters. Chapter five also

includes a recommendation for implementing this evaluation procedure in

the Intelligence Community. The appendices include a copy of the surveys

used; the completed evaluation worksheets for the benchmarked Web sites;

and a blank evaluation worksheet.

CHAPTER 2

LITERATURE REVIEW

RANGE OF THOUGHT

Open source information (OSINF) has been widely accepted as a

necessary element of all-source intelligence reporting, as demonstrated by

Director of Central Intelligence Directive 2/12, which established the

Community Open Source Program Office.10 Most experts agree that OSINF

should support classified intelligence collection. However, I think there has

not been significant attention paid to the issue of identifying credible Web

sites, a significant source of unclassified information. The Web makes foreign

newspapers and “gray” literature (documents with limited distribution such

as company brochures, or equipment manuals), more accessible, as well as

expert opinions, and research projects from universities, just to name some

valuable sources.11 The issue of identifying credible Web sites affects

everyone who uses the Internet, including defense, intelligence, academia,

and industry. Therefore, the literature reviewed for this study included

documents from all of these communities of interest. The authors presented

in this study include: Robert David Steele of Open Source Solutions Inc.; Dr.

Wyn Bowen of Kings College, London, writing for Jane’s Intelligence Review;

A. Denis Clift, President of the Joint Military Intelligence College (JMIC),

Washington, D.C.; Reva Basch, author of Secrets of the Super Net Searchers;
10
Director of Central Intelligence, Director of Central Intelligence Directive
2/12 (Washington, D.C.: n.p., 1 March 1994). Hereafter cited as DCID 2/12.
11
Basch, 110.

and Alison Cooke, author of Authoritative Guide to Evaluating Information on

the Internet. These authors are all in a position to influence information

analysts, either inside or outside of government, and represent a range of

opinions on the proper use of open source information.

All these points of view agree that there is more data available now

than an analyst can manage unaided. Their approach is what differs. Steele

and Bowen would expand the Intelligence Community, which is not going to

happen without a long, and gradual culture change. Clift sees a need for

better automated tools for data retrieval, including an on-line index of open

sources.12 Cooke and Basch offer solutions for today: evaluate sources based

on criteria similar to those used for traditional print media. This thesis will

demonstrate that the ideas of each of these authors, combined with the

recommended evaluation criteria in this thesis, represent a practical solution to

the information fog of the Web.

Robert David Steele, Open Source Solutions, Inc.

Steele is the most vocal advocate for expanded use of OSINF to

support the other intelligence disciplines, and recommends expanding the

Intelligence Community to include business people and academics, who have

unique knowledge and access. Steele would have analysts consult open

sources first, including subject experts in industry and academia, and then

classified sources. He is President of Open Source Solutions Inc. His

company is in the private open source intelligence (OSINT) business, and he

12
A. Denis Clift, Clift Notes: Intelligence and the Nation’s Security
(Washington, D.C.: Joint Military Intelligence College, 1999), 51-57.

has proposed his own plan for intelligence in the 21st Century, called

Intelligence and Counterintelligence: Proposed Program for the 21st

Century.13 Steele sees a great need to expand the access that analysts have

to OSINF.14 His view of the future Intelligence Community (IC) includes

several new groups, including scholars and business people, which constitute

the Virtual IC.15 It is these sources that Steele sees as the gold mine of

information. However, he does acknowledge that the Internet will greatly

expand access to OSINF, primarily secondary sources, which are derived from

an original source. He also suggests that OSINF may be used as a source of

“tip-offs” to serious issues that warrant classified collection.16 However, his

stand that classified intelligence is only useful in the context of what is

already known from open sources borders on accepted practice.

Dr. Wyn Bowen, Open-source Intelligence.

Bowen is an academic concerned about information overload, and

would add non-government subject-matter experts to the intelligence

collection process, as Steele suggests. Bowen thinks that subject-matter

experts should be the people to evaluate Web sites, which is unique in this

literature review. However, he sees open sources as an adjunct to classified

sources, not the source of first resort as Steele suggests. Bowen, who is a
13
Robert D. Steele, Intelligence and Counterintelligence: Proposed Program
for the 21st Century, URL: <http://www.oss.net/OSS21>, accessed 5 January 2000.
Cited hereafter as Steele, Intelligence.

14
Steele, Intelligence, under “Introduction.”
15
Steele, Intelligence, under “Part III” Figure 18.
16
Steele, Intelligence, under “Part III.”

professor at Kings College, London, and writes for Jane’s Intelligence Review,

demonstrates the invaluable resources available through open sources in his

article Open-source Intelligence: A Valuable National Security Resource.17 He

uses weapons proliferation as a demonstration case. This case is very

effective because it reduces the issue to tangible products of intelligence

value found in the public domain. Bowen thinks that the role of OSINF is to

provide the context of classified information.18 He also dwells on the issue of

information overload, which concerns Clift. However, he would add non-

government subject-matter experts to the collection process, as Steele also

suggests. Bowen thinks the experts’ role should be to identify the useful

sources to keep and collect (not specific data), and the worthless sources to

ignore. In his view, experts would also serve to evaluate sources for

inaccuracy, bias, irrelevance and disinformation, which non-experts would

find difficult to do.19

17
Dr. Wyn Bowen, “Open-source Intelligence: A Valuable National Security Resource,”
Jane’s Intelligence Review, 1 November 1999, Dow Jones Interactive, “Publications
Library,” “All Publications,” Search Terms “Open Source Intelligence,” URL: <
http://djinteractive.com>, accessed on 4 March 2000.

18
Bowen, under “Technical Sources.”
19
Bowen, under “Conclusion.”

A. Denis Clift, President of the Joint Military Intelligence

College

Clift is also concerned about information overload, and sees a need for

better automated selection tools to solve the analysts’ selection problems.

Clift is President of the Joint Military Intelligence College (JMIC) in Washington,

D.C. His views are his own and do not represent that of the U.S.

Government; however, as President of the JMIC, Clift is in a position to

influence the opinions of analysts graduating and going on to work in

intelligence. He also served as Editor for the United States Naval Institute

Proceedings, early in his career, from 1963 to 1966.

In Chapter five of Clift Notes: Intelligence and the Nation’s Security,

Clift gives a short explanation of the open source programs available today to

support the intelligence analyst.20 He defends the Intelligence Community’s

record on making open source information (OSINF) available to intelligence

analysts. He gives an overview of the OSINF programs available to the

analysts, but does not indicate how accessible the information is. I observed

lines of analysts waiting to use Internet terminals in the JMIC library in 1999

and 2000. This is an example of why it should be clear to the Intelligence

Community (IC) that OSINF will only be used to its highest potential when it is

on the analyst’s desk. The work lost walking to a terminal down the hall or in

the next building is not worth the effort to analysts unfamiliar with the

sources, or inundated with other sources at their fingertips. Clift writes that

OSINF plays an important role in intelligence, and states that the IC already

has a good collection of OSINF in Central Information Reference and Control

20
Clift, 51-57.

(CIRC) of the National Air Intelligence Center and the Defense Scientific and

Technical Intelligence Centers.21 He notes the serious difficulties analysts

have within formation overload and the need for better-automated selection

tools.22 However, the technology Clift wants is not yet intelligent enough to

discern credible sources from non-credible sources. As will be demonstrated

in the findings chapter, determination of credibility requires research, and

corroboration, and has a measure of subjectivity.

Reva Basch’s Secrets of the Super Net Searchers

Basch does not address the Intelligence Community, but does address

the issue of how to select trustworthy Web sites. Basch, as well as Cooke,

takes the most practical approach to finding credible information in the flood

of electronic data. Both recommend using evaluation criteria similar to that

used for print media, with some variations.

Basch published Secrets of the Super Net Searchers in 1996, after

interviewing 35 of the best Internet searchers. In 1996, she was the news

editor for ONLINE, DATABASE, and ONLINE USER magazines and had been an

online researcher for about 21 years. Since then, she has published a series

of Super Searchers books. For Secrets of the Super Net Searchers she

conducted informal interviews with expert researchers, each of which

represents a chapter in Super Searchers. Her questions covered many issues

21
Clift, 54.
22
Clift, 56.

affecting online researchers and included the following, which relate to Web

site credibility:23

• What is the quality and reliability of information on the Web?

• Are some types of sites more reliable than others?

• How are biased sources treated?

• How are the quality and reliability of unfamiliar Web sites judged?

• Is there a relationship between credibility and longevity?

Many of the experts Basch interviewed had something useful to say

about source credibility, which were consolidated into several survey

questions for this thesis.

There is disagreement whether information from personal Web sites is

credible. Susan Feldman stated in Super Net Searchers that a “Web site

written by ‘Joe Schmo’ might be way ahead of McGraw-Hill. So you’re left to

your own devices to analyze and evaluate.”24 However, Mary Ellen Bates, also

interviewed by Basch for Super Net Searchers, stated at a WebSearch

conference in Virginia on 10 May 2001 that she does not rely on personal

Web sites unless they are well known.25

Alison Cooke, Authoritative Guide to Evaluating Information on


the Internet

Cooke also does not address the Intelligence Community, but does

address the issue of how to select trustworthy Web sites. Cooke also

23. Basch, 3.
24. Basch, 31.
25. Mary Ellen Bates, presentation to the WebSearch University Conference, Reston, VA, 10 September 2001.
recommends using evaluation criteria similar to those used for print media, with some variations.

Alison Cooke, a professional Internet searcher, wrote the Authoritative Guide to Evaluating Information on the Internet in 1999. The author’s implicit thesis is that although there is much useless, outdated, and difficult-to-authenticate information on the Internet, high-quality information can be found and its quality can be assessed.26 Like Clift and Bowen, Cooke

sees information overload as a serious challenge facing researchers, but

believes accuracy is of most concern to researchers. Her solution is to

carefully evaluate Web sites using criteria similar to criteria used to evaluate

print media.

EVERY MAN’S PRINTING PRESS

There are widely accepted criteria for evaluating traditional print

media. These criteria include the reputation of the publisher and author,

peer-review of scientific articles, and editorial review of periodicals.27 Such

criteria work well when the number of publishers in a particular field is quantifiable and their past work can be located and reviewed. However,

desktop publishing programs, personal computers, and the Web have

enabled hundreds of thousands of people to produce professional-looking

articles and distribute them to millions of potential readers without the

26. Alison Cooke, Authoritative.
27. Jan Alexander and Marsha Tate, “The Web as a Research Tool: Evaluation Techniques,” Wolfgram Memorial Library, Widener University, Chester, PA, URL: <http://www.science.widener.edu/~withers.evalout.htm>, accessed 13 March 2001.
benefit of peer or editorial review, or regard for brand name reputation.

Among the millions of Web pages available to the public today are many of

potential intelligence value produced by proud inventors, boisterous

government agencies, self-promoting corporations, community-minded

colleges, naïve public servants, happy vacationers, and zealous

revolutionaries. The issue at hand today is how to identify credible

information among the millions of personal, organizational, industry,

academic, and government sources. There are as many opinions on this

topic as there are open source researchers and intelligence analysts.

INFORMATION GAPS

Even after a Web site is evaluated based on the criteria presented in

Basch, Cooke or Alexander, the issue of credibility still remains. How does a

subject-matter novice know which sources he can believe? The other issue is

that of relativity. Is a Web site that is credible enough for a high school term paper also credible enough for a basic intelligence report, or for an intelligence warning report? This study answers both of these questions.

CHAPTER 3

METHODOLOGY

This chapter on methodology and the following chapter on findings are

organized by key issues and research questions. The key issues are

obstacles that must be overcome before the research question can be

answered. The key issues include: how is open source information relevant

to intelligence; do analysts know of existing official credibility criteria; are

analysts biased toward popular source titles; are foreign sites in English less

credible; and how does the credibility of classified sources compare to

unclassified sources? To answer the research question of how to identify credible sources on the Web, it was necessary to separate the question into two parts. The first part was what criteria can be used to identify credible Web sites. The second part was how credible any intelligence source should be. The methodology relies on logic and statistics, and is somewhat complex due to the many steps necessary to arrive at useful, accurately weighted criteria. It begins with the development of the thesis survey.

KEY ISSUE: OSINF RELEVANCE TO INTELLIGENCE

Even before the survey could be developed, the basic question needed

to be answered: why is open source information relevant to intelligence? The

literature review provided several views on the role of open sources in

intelligence. The opinions of Steele and Clift offered convincing reasons that

intelligence must include open source information. The reasons for using

OSINF in intelligence products are included in the findings chapter.

SURVEY DEVELOPMENT

Although the primary research question was how to identify credible sources on the Web, this thesis needed to resolve several key issues regarding source credibility along the way. Two research methods were used to answer the key issues and

research question. First, published literature was reviewed from Intelink,

online DIA course material, Lexis-Nexis, Dow Jones Interactive, the NSA

Library, and academic Web pages. This literature review uncovered some

answers to the key issues and provided the majority of the concepts tested

by the thesis survey.

Once the thesis survey was developed, it was given to a test

population of 15 intelligence analysts for a validity check. The 15 analysts

completed the survey, suggested adding questions and clarifying ambiguous wording, and questioned the relevance of some questions. Those changes

were made and the second draft was given to Professor Jerry P. Miller,

Director of the Competitive Intelligence Center at Simmons College in Boston.

Miller offered numerous suggestions that improved the reliability of the

survey. He identified government “lingo” that would not likely be

understood in industry and academia, and recommended changes to the

survey questions to maintain Likert-type scales for the responses. Likert scales are a recognized social-science method of formatting survey response options; they are understood by most populations and can be used to measure a population’s opinions evenly.

The second draft was also sent to LTC (ret) Karl Prinslow, project

manager and operations officer of a virtual organization that employs over

150 military reservists who work via telecommuting to collect and acquire

open source information in support of the Intelligence Community's

requirements. Prinslow suggested several format changes that ensured all recipients were able to display the survey on their computers, and would be

comfortable replying with anonymity. Prinslow and Miller suggested adding

the personal information disclosure statement. Prinslow also recommended

E-mailing the survey as an ASCII text message rather than an MS-Word document, and simplified some questions. The text message enabled

anyone who was able to receive the E-mailed survey to respond to it without

special software.

After making the changes suggested by Miller and Prinslow, two

separate surveys were distributed by E-mail. In the coding and analysis, the

two surveys were treated as one survey, with some questions not applicable

to the whole population. The Intelligence Community (IC) Survey included

several questions at the end, which would not apply to industry or academia,

and it was distributed by internal communications. The Industry Survey

included the same questions as the IC Survey without the IC-unique

questions. The IC Survey was E-mailed to a group of about 100 IC analysts

who have an interest in open source intelligence (OSINT). The exact number

of IC analysts cannot be determined because it was sent to a mail-list, which

often changes. This method had the effect of randomizing the population

selection. One of these 100 analysts E-mailed the survey to 18 other IC

analysts. Four of these 18 E-mailed the survey to 238 others, for a total of

356 IC analysts. This chain of events was evident from the E-mail headings

and some respondents informed the author of who had forwarded the survey to them. About 50 participants from a Society of Competitive Intelligence Professionals (SCIP) conference were then contacted by telephone and agreed

to participate in the E-mail Industry Survey. The Industry Survey was then E-

mailed to those 50 and 9 Defense Department analysts. One of the 9

Defense analysts E-mailed the survey to about 120 other defense analysts. A

total of about 179 analysts are known to have received the Industry Survey.

Together, the two surveys reached about 535 analysts who have an interest

in Internet research. With 66 responses, this equates to a 12.3 percent

response rate from a randomly selected population.28
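The distribution-chain and response-rate arithmetic described above can be checked with a short calculation. This is only a sketch using the counts reported in this section:

```python
# Reported distribution of the two e-mailed surveys (counts from the text).
ic_initial = 100          # IC analysts on the original mail-list
ic_forwarded = 18 + 238   # forwarded by one analyst, then by four more
ic_total = ic_initial + ic_forwarded            # 356 IC analysts

industry_initial = 50 + 9 # SCIP participants plus Defense analysts
industry_forwarded = 120  # forwarded by one Defense analyst
industry_total = industry_initial + industry_forwarded  # 179 analysts

population = ic_total + industry_total          # about 535 analysts
responses = 66
rate = round(100 * responses / population, 1)   # 12.3 percent
print(ic_total, industry_total, population, rate)
```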

RESEARCH QUESTION AND SURVEY STRUCTURE

The survey was structured to answer several key issues and the

research question: how to identify credible sources on the Web. The

hypothesis was that credible Web sites can be confidently identified by

evaluating the Web sites based on criteria recommended by professional Web

searchers and agreed to by intelligence analysts. The thesis survey asked

this question directly in survey question 6, and indirectly in survey questions

28. Appendices B and C include a copy of the E-mailed surveys.
8a through 8r. Question 8 listed the criteria most often mentioned by

published experts. Here is how the survey asked these questions.29

6. List up to five criteria that you use to determine the credibility of any information source.
a.
b.
c.
d.
e.

8. How much credibility does each of the following factors add to the
total credibility of a Web site? Use the following scale:

___6) 100 percent Credibility


___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility

a. Recommended by a subject-matter expert.


b. Recommended by a generalist.
c. Listed by an Internet subject guide that evaluates Web sites.
d. Listed in a search engine such as Alta Vista.
e. Listed in a Web-directory organized by people, such as yahoo.
f. Content is perceived current.
g. Content is perceived accurate.
h. A peer or editor reviewed the content.
i. Content's bias is obvious.
j. Author is reputable.
k. Author is associated with a reputable organization.
l. Publisher, or Web-host is reputable.
m. Content can be corroborated with other sources.
n. Other Web sites link to or give credit to the evaluated site.
o. The server or Internet domain is a recognized copyrighted or trademark name such as IBM.com.
p. There is a statement of attribution.
q. Professional appearance of the Web site.
r. Professional writing style of the Web site.

To avoid influencing the responses to survey question 6, analysts were

first asked to list the criteria they currently use; they were later asked to

29. Survey, questions 6 and 8.
evaluate the list of criteria in questions 8a through 8r. If the survey

population had been asked about specific criteria (question 8) before being

asked what criteria they actually use (question 6), they may have been

influenced to include the listed criteria from question 8 as criteria that they

use. This arrangement was necessary because earlier discussions with

analysts revealed that there were criteria that analysts would use only after

they were told of them. Discussions with analysts prior to the survey

development had also revealed that many analysts do not know how they determine what a credible source is, and that many may evaluate only the data, not the source.

As is shown in the findings chapter, many analysts were confused

about the difference between data validity and source credibility. The

categorized results of question 6 were then compared to the specific criteria

analysts approved of in question 8.

RESEARCH QUESTION: CREDIBILITY CRITERIA

The results of questions 6, and 8a through 8r were used to develop the

recommended credibility criteria and credibility scale in the findings chapter.

The recommended criteria were determined by computing the mode (score most-often chosen) for each criterion in survey questions 8a through 8r, along with the variance of the responses. An unusual amount of variance would indicate little agreement among the analysts.

from question 8 that scored a mode of 50-percent credibility or greater were

included in the recommended criteria list. This means analysts most often

believe (mode) that the satisfaction of any one of these recommended

criteria made the source at least 50-percent credible.

Then the arithmetic mean (average) credibility was calculated for each

recommended criterion from question 8 and became that criterion’s relative

value. The relative value is how much more important, on average, analysts

think one criterion is than another criterion. The assumption here is that

such attributes are cumulative, and the more recommended criteria a site

satisfies, the more credible is the site.

The results of question 6 were categorized into a list of criteria that

analysts think they use to evaluate source credibility. The frequencies of

these criteria were calculated, and those criteria suggested by at least 50 percent of the analysts were added to the recommended criteria list.

Because the recommended criteria from question 6 were not evaluated on a

scale in the survey, they were arbitrarily assigned the average relative value

of those recommended criteria from question 8. This allowed the inclusion of

any criteria not included in question 8, but also did not significantly affect the

relative values of those criteria.

The following is a summary of the selection process for the recommended criteria and the relative-value calculation:

Step 1. Calculated the mode (most-often chosen) credibility (0-100


percent) of each criterion from survey question 8.

Step 2. Listed as recommended the criteria from question 8 that had a


mode credibility of 50 percent or greater.

Step 3. Calculated the mean credibility (average analyst chosen score)


for each recommended criteria from question 8.

Step 4. From question 6, added to the list of recommended criteria,
those criteria not already on the recommended list, and that had a mean
occurrence of 50 percent or greater (at least half the analysts listed the
criteria).

Step 5. Calculated the mean credibility of all the recommended criteria from question 8, and assigned that average credibility to each of the additional recommended criteria from question 6.

Step 6. Listed all the recommended criteria with their individual mean credibility as their relative values.30
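The six steps can be sketched in code. This is an illustrative reconstruction of the selection process, not the computation actually performed for the thesis; the criterion names, response scores, and question 6 frequencies below are hypothetical.

```python
from statistics import mode, mean

# Hypothetical responses: criterion -> credibility scores (0-100 percent)
# chosen by analysts in question 8.
q8_responses = {
    "author reputable":        [75, 75, 50, 100, 75],
    "content current":         [50, 50, 75, 50, 25],
    "listed in search engine": [10, 25, 10, 0, 10],
}
# Hypothetical question 6 write-in criteria -> fraction of analysts listing them.
q6_frequency = {"corroboration": 0.6, "gut feeling": 0.2}

# Steps 1-3: keep question 8 criteria whose mode is at least 50 percent;
# each kept criterion's mean score becomes its relative value (weight).
recommended = {c: mean(scores) for c, scores in q8_responses.items()
               if mode(scores) >= 50}

# Steps 4-5: add question 6 criteria listed by at least half the analysts,
# weighted with the average relative value of the question 8 criteria.
avg_value = mean(recommended.values())
for criterion, freq in q6_frequency.items():
    if freq >= 0.5 and criterion not in recommended:
        recommended[criterion] = avg_value

# Step 6: list each recommended criterion with its relative value.
for criterion, value in sorted(recommended.items(), key=lambda kv: -kv[1]):
    print(f"{criterion}: {value:.1f}")
```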

The criteria’s relative value can then be used to evaluate a Web site.

When evaluating a Web site for credibility, the relative values can be

summed for the criteria that the evaluated site satisfies. The site credibility

score can be compared to the scores of known credible sites, listed later in this chapter as benchmark Web sites.31

RESEARCH QUESTION: CREDIBLE ENOUGH TO USE

Even after an analyst has calculated the credibility score of a Web site,

he must know how credible a source must be to justifiably include it in an intelligence report. Therefore, survey questions 9a through 9f asked:32

9. How credible must an intelligence source be to use its data in the following
intelligence products? Use includes when you would use qualifiers such as
"possible survived". Choose the required level of credibility for each type of
intelligence.

30. See Table 1 in the findings chapter for the list of recommended criteria and their relative values.
31. See Appendix A for the benchmarked Web site evaluation worksheets.
32. Survey, questions 9a – 9f.
Scale:
___7) No Opinion
___6) 100 percent Credible
___5) 75 percent Credible
___4) 50 percent Credible
___3) 25 percent Credible
___2) 10 percent Credible
___1) 0 percent Credible

Analysts were then asked to choose the required level of credibility for:

9a. Research, or topic summaries.


9b. Current, day-to-day developments.
9c. Estimative, identifies trends or forecasts opportunities or threats.
9d. Operational, tailored, focused to support an activity.
9e. Scientific, and technical, in-depth, focused assessments.
9f. Warning, an alert to take action.

The mode response for each of these types of intelligence products was calculated and became the product-credibility levels, which are shown in Table 2 in the findings chapter. The product-credibility percentages were converted into scores so that analysts can simply add the results of an evaluation and compare the sum to the table of product-credibility levels.

The product-credibility level is also the credibility level that is needed

for sources that analysts use for a particular intelligence product. When a

potential Web site is evaluated, the analyst calculates the credibility score of

the evaluated site, and then compares it to the table of product-credibility

levels in Table 2. The sum of the evaluated Web site should be at least equal

to the product-credibility level of that type of intelligence product shown in

the table. The product-credibility level of each intelligence product type was determined by calculating the percentage of a benchmarked, very credible Web site’s score that equals the credibility level recommended by the surveyed analysts. For example, here is a theoretical

Web site evaluation, which also demonstrates how the product-credibility

level was determined.

Example:

Benchmark site credibility score = 46.75 points (100 percent Credible)


Product-credibility level of intelligence product: 35.06 (75 percent of
46.75).
Theoretical results of a Web site evaluation:

Meets Criteria 1 = 5 points


Meets Criteria 3 = 6 points
Meets Criteria 4 = 3 points
Meets Criteria 5 = 3.5 points
Meets Criteria 6 = 5 points
Meets Criteria 7= 4.5 points
Meets Criteria 10 = 2 points
Meets Criteria 11 = 3 points
Meets Criteria 12 = 3.5
Meets Criteria 13 = 1.5
Meets Criteria 14 = 4.5

Sum of Evaluated Site = 41.5 points


Result: Exceeds the product-credibility level of 35.06
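The arithmetic in this example can be verified with a few lines. The benchmark score, threshold percentage, and criterion values are those given above:

```python
# Benchmark score and 75-percent product-credibility threshold from the example.
benchmark_score = 46.75
product_level = round(0.75 * benchmark_score, 2)   # 35.06 points

# Relative values of the criteria the theoretical site satisfies.
satisfied = [5, 6, 3, 3.5, 5, 4.5, 2, 3, 3.5, 1.5, 4.5]
site_score = sum(satisfied)

print(product_level, site_score, site_score >= product_level)  # → 35.06 41.5 True
```

Note that the listed criterion values sum to 41.5 points, which exceeds the 35.06-point product-credibility level.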

This summarizes the process recommended in this thesis to evaluate

the credibility of a Web site. This process is based on the theory that the

criteria recommended by expert Web searchers and approved by most

analysts are the best criteria for evaluating Web sites. The weight or relative

value of each criterion is based on the average score given the criterion by

analysts. The final evaluation is based on a comparison of the total values of

the evaluated site to the total values of the benchmark sites.

KEY ISSUE: OFFICIAL CREDIBILITY CRITERIA

It also seemed important to know what the Intelligence Community’s

official criteria are for evaluating the credibility of sources. However, after

failing to identify such a policy, it became more relevant to know if analysts

were aware of such a policy. It was reasoned that if analysts were not aware of

such a policy, its existence was irrelevant. The survey results of this question

would determine if consistent credibility criteria are used in the Intelligence

Community. The lack of such criteria may call into question the consistency

of intelligence reporting. Therefore, question 5 asked:33

5. Does your organization have official criteria that you are told to use
for determining the credibility of any source? "Any source" means published,
proprietary, and classified sources. Choose all that apply:

___a. Yes, I know the official criteria for evaluating


UNCLASSIFIED information sources.
___b. No, I don't know of official criteria for evaluating
UNCLASSIFIED information sources.
___c. No, I don't know the official criteria for evaluating
UNCLASSIFIED information sources.

___d. Yes, I know the official criteria for evaluating CLASSIFIED


information sources.
___e. No, I don't know of official criteria for evaluating
CLASSIFIED information sources.
___f. No, I don't know the official criteria for evaluating
CLASSIFIED information sources.

KEY ISSUE: ANALYSTS’ OBJECTIVITY AND WELL-KNOWN TITLES

33. Survey, question 5.
Discussions with analysts and the literature review indicated that well-

known publication titles are perceived as more credible than obscure titles,

even though the analysts may never have seen the well-known titles. Therefore, to determine how objective analysts are, questions 7a through 7m asked analysts to evaluate the credibility of 13 sources based only on their

titles. This key issue was answered by comparing the well-known titles in

survey questions 7a, b, c, j, k, l, and m, with obscure titles in survey

questions 7d, e, f, g, h, and i. Question 7 asked:34

7. How credible are the following information sources given only their
titles? Choose one from the following scale:

___7) = Certainly True


___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False

Well-known Titles:
a. NY Times
b. Washington Post
c. Harvard.edu Web site
j. NationalGeographic.com Web site
k. JanesDefenseWeekly.com Web site
l. InformationWeek.com Web site
m. DowJonesInteractive.com Web site

Obscure Titles:
d. RussianArmy.ru Web site in Russian
e. RussianArmy.ru Web site in English
f. IsraelIndependentNews.is Web site in Hebrew
g. IsraelIndependentNews.is Web site in English
h. FrenchIndependentNews.fr Web site in French
i. FrenchIndependentNews.fr Web site in English

34. Survey, questions 7a – 7m.
However, there was a problem with how this question was structured

and the findings may not be valid. Judging from the comments in the

surveys, it was evident that analysts were not able to make credibility

judgments for many sources based on titles alone either because they had

personal experience with the sources, which influenced their judgments, or

because they were unwilling to make an uninformed judgment based on titles

alone.35

KEY ISSUE: FOREIGN LANGUAGE SOURCES

An issue related to source titles was whether analysts perceive foreign sources published in their native language as more credible than the English-language versions of the same publications. This question was

answered by comparing survey questions 7d to 7e, and comparing 7f to 7g,

and comparing 7h to 7i. The validity of these questions was preserved by not including any real publication or Web site titles with which the analysts might be familiar.36

7. How credible are the following information sources given only their
titles? Choose one from the following scale:

___7) = Certainly True


___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False

d. RussianArmy.ru Web site in Russian

35. Survey, questions 7a – 7m.
36. Survey, questions 7d – 7i.
e. RussianArmy.ru Web site in English

f. IsraelIndependentNews.is Web site in Hebrew


g. IsraelIndependentNews.is Web site in English

h. FrenchIndependentNews.fr Web site in French


i. FrenchIndependentNews.fr Web site in English

KEY ISSUE: CLASSIFIED VS. UNCLASSIFIED SOURCES

Discussions with IC managers and consultants often raised the question of how classified sources compare in credibility to unclassified sources and, less often, how classified sources compare to one another. This comparison is likely to change over time. One JMIC professor explained that different intelligence sources seem to go in and out of favor as collection access improves for one source or another. These issues were only included in the IC Survey, and most analysts answered as though they had an opinion. Therefore, questions 7n through 7s asked:37

7. How credible are the following information sources given only their
titles? Choose one from the following scale:

___7) = Certainly True


___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False

The intelligence sources in question included:

7n. HUMINT sources with no reporting record


7o. HUMINT sources with a proven reporting record
7p. IMINT, with National analysts annotations or comments
7q. IMINT, without National analysts annotations or comments

37. Survey, questions 7n – 7s.
7r. SIGINT reporting
7s. MASINT

Analysis of these questions included a calculation of the mode and range for all sources included in question 7, which were then compared to each other. This provides an interesting comparison of classified and unclassified sources.38
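The mode-and-range analysis described here can be sketched as follows. The 7-point responses below are hypothetical, not the actual survey data, which appear in the findings chapter tables:

```python
from statistics import mode

# Hypothetical 7-point responses (1 = Certainly False ... 7 = Certainly True)
# for two of the question 7 intelligence sources.
responses = {
    "7o HUMINT, proven reporting record": [6, 6, 5, 6, 7, 5],
    "7r SIGINT reporting":                [5, 6, 5, 4, 5, 6],
}

for source, scores in responses.items():
    spread = max(scores) - min(scores)  # a large range indicates little agreement
    print(f"{source}: mode={mode(scores)}, range={spread}")
```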

ETHICS

The thesis survey relied on truthful responses from analysts currently working in areas covered by the survey. Such responses could be critical of an analyst’s employer or profession; therefore, the survey included the following statement intended to protect the respondents’ anonymity.

PRIVACY:
You do not need to include your name; however, if you choose to
include your name, it will only be used by me to contact you if I need
more information regarding your comments. I will not quote you
directly unless you indicate in Questions 3 and 4 that I may do so.
Otherwise, only me and my Thesis Chairman, Professor Alex Cummins
… will have access to respondent names. Any record of the names in
association with the responses will be destroyed after the research is
completed, except those names included in the thesis with
permission.39

38. See Table 7 and Table 8 in the findings chapter.
39. Survey, Privacy.
CHAPTER 4

FINDINGS

This chapter first describes what was discovered in the literature

review that could answer the research question and the key issues. Then the

results of the survey are described, followed by how these results answered the research question and the key issues. The survey determined what criteria analysts use today to judge the credibility of an intelligence source,

which can be found in Appendix D. Even after consolidation, 148 separate

criteria were suggested by analysts, indicating little consistency in criteria, or

little understanding of the differences between data validity and source

credibility. Many of the suggested criteria appear to be measures of valid

data, or lists of known credible sources.40

The most significant result of the survey is the list of recommended

credibility criteria determined by surveying analysts’ opinions of criteria

suggested by experts in the literature review. Only two expert

recommendations were rejected by the surveyed analysts. The survey also

showed that analysts see only a small difference in the credibility of open

sources and classified sources.41 42

40. Survey, question 6.
41. Survey, questions 7a through 7s.
42. See Table 8 in the findings chapter for a comparison of classified and unclassified source credibility.
Just as useful as the credibility criteria is the credibility scale

developed by benchmarking known credible and known non-credible Web

sites. The benchmarked sites determined the expected score of a credible

Web site. The survey results also determined a target level of credibility for

intelligence sources, which was converted to a percent of the credible

benchmark score on the credibility scale. The benchmarking of known

credible and non-credible Web sites validated the criteria and demonstrated

that credible sources can be identified on the Web.43

KEY ISSUE: OSINF RELEVANCE TO INTELLIGENCE

Although all experts agree that open source information (OSINF)

contributes to intelligence, how OSINF should contribute is still an open

debate. Steele suggests that analysts should reference OSINF first, and then

classified sources, and presumably only then request further classified

collection to fill the intelligence gaps.44 This approach would acquire data

from the least expensive sources first. Steele calls for 5 percent of the

intelligence budget to be moved to support OSINF acquisition.45 He claims

this would increase timely intelligence by a magnitude. His comments

suggest an answer to the key issue how relevant is OSINF to intelligence.

Open sources include what is already publicly known about a subject, and

therefore should represent the background and context of any intelligence


43. See Appendix A, Benchmarked Web Site Evaluation Worksheet.
44. Steele, under “Part III.”
45. Steele, under “Part III.”
report, and should be considered before any classified collection is

attempted. Not to do so would potentially waste funds and possibly put

people at risk for information that may have been found in a foreign Web

site, foreign newspaper, or company brochure. These open sources can also

be used to corroborate classified intelligence, thus contributing to the

credibility of a classified source. Because classified resources are so much

more expensive than open sources, open sources should always be the first

choice, followed by classified sources if the information is not available through open sources, or if the open sources’ credibility cannot be determined or is determined to be too low. Therefore, OSINF affects the cost of intelligence, the timely access

to information, the context of intelligence, the credibility of intelligence, as

well as the content.

Bowen’s recommendations to include subject-matter experts in the

intelligence collection cycle may be a practical way to implement the

evaluation process proposed by this thesis.46 Implemented community wide,

Bowen’s cadre of OS subject experts could produce a significant savings in

time and money spent by countless analysts attempting to sort the useful, credible information from the useless and non-credible information. I have

observed that every analyst who makes use of Web sites for open source

intelligence must rediscover which sites are useful and credible, even though

an expert at another agency or just down the hall may have already

evaluated the site. Also, when a Web site is recommended by one analyst to

another analyst, there is no consistent way to evaluate the Web site and

express that evaluation to other analysts. This research produced a

46. Bowen, under “Collection Strategy.”
methodology to evaluate Web sites and consistently communicate that

evaluation to other analysts.

RESEARCH QUESTION: HOW TO IDENTIFY CREDIBLE WEB SITES

Reva Basch’s Secrets of the Super Net Searchers is an essential source

of expert credibility criteria for Web sites, which were incorporated into the

thesis survey.47 The following recommendations from expert Web searchers

are comprehensive. Bob Bethune, a research consultant in Ann Arbor, told

Basch that one should evaluate Net sources the same as one would print

material.48 Bethune explained that those tests include:

“Is this source of information direct or derivative?


Is it biased, and if so, in what way?
Can the claims made here be corroborated by independent
evidence?”49

Bethune believes that every source is biased, and the better sources do not hide the bias.50 Throughout Super Searchers, many of the researchers’ comments overlapped, indicating general agreement about some evaluation criteria. All of the expert comments regarding credibility were consolidated and reformatted into balanced questions for the thesis survey. These comments included the following, which can be attributed to one or more professional researchers:

47. Survey, questions 8a – 8r.
48. Basch, 9.
49. Basch, 9.
50. Basch, 9.
• Bias. The researcher must understand the source’s bias.51

• Objectivity. Are the author’s statements supported with reasoning or facts?52 Even a biased author can compensate for his bias by including competing reasoning and facts.

• Accuracy. Online sources are generally quicker than print media at

correcting errors.53 Even print sources include inaccurate information or

disinformation.54 I believe that this is significant because accuracy affects

credibility; therefore, Web sources should be more accurate and timely

than print media because the technology enables quicker revisions.

• Expert opinion.

• Rely on second-party expert evaluation whenever possible, e.g., recommendations from professional associations, academic organizations, or subject experts.55

• Informal networks of colleagues with different areas of expertise

inform one another of credible sources.56

• Use second opinions to evaluate the accuracy of an author, which

can be done by posting related questions to appropriate news groups.57

51. Basch, 9, 15.
52. Basch, 31.
53. Basch, 48.
54. Basch, 9.
55. Basch, 31.
56. Basch, 31.
57. Basch, 31.
• Subject area Web pages created by subject librarians are a good

source of links to evaluated Web sites.58 I recommend evaluation sites

that explain their evaluation process.

• Gray literature (documents with limited distribution such as

company brochures, or equipment manuals), best located on the Web, is

often published by very credible sources, including governments and

corporations, which can be good sources for factual data. Interpretation

of the data may require an expert.59 I suggest asking a subject-matter

expert to distinguish facts from advertising in corporate literature.

• Origin.

• How close is the source to the origin of the data? 60

• Discover the original source to avoid circular and false

corroboration.61

• Corroboration. Can the information be corroborated?62

Corroboration is only effective if it is from diverse sources. This is another

reason it is important to know the origin of the data.

• Current. Is the information current? 63

58. Basch, 139.
59. Basch, 40, 110.
60. Basch, 9.
61. Basch, 16.
62. Basch, 9, 96.
63. Basch, 132.
• Format. Is the source professionally formatted, indicating attention

to detail?64 Web publishing software has made professional formatting far
easier than it ever was in print publishing. For this reason, I would not
give Web site format the same weight as print media format.

• Association.

• What are the author’s affiliations, e.g., academic, industry, or

government?65 Although I think industry is often the best source for some
types of data, including scientific data, industry is often biased toward its
own products or chosen technology.

• Commercial publications gain credibility if they are included in

Lexis-Nexis or Dialog.66 These are two information brokers who have a

reputation to protect by ensuring that they are only associated with

credible sources.

• Reputation.

• What is the reputation of the author and publisher?67

• Web sites in the .gov domain are generally credible, as are

academic Web sites. However, Web site evaluators need to verify that an

academic Web page represents the institution and not just a student.68

64. Basch, 132.
65. Basch, 31.
66. Basch, 49.
67. Basch, 31.
68. Basch, 132, 224.
• Know which publishers, universities, or companies are well

respected in your topic area.69 These are likely to be credible sources, or

able to identify credible sources.

• Reputable publishers, well-known authors, and (peer) reviewed

publications are more credible than other sources.70

• Attribution.

• Does the source clearly identify itself and its purpose?71

• Indications of the source include the text of the Web site, the name

of the Web server in the URL, and the directory name in the URL, which

may include the author’s name.72

• Attribution should include the institution and a person, with
information on how to contact the author.73

• I would also recommend viewing the Web site’s HTML source code

for revision dates, and statements of attribution not shown in the Web

site’s body.
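That habit of reading the raw HTML can be partially automated. The sketch below is a minimal illustration, not a method from the thesis: the sample page is invented, and the meta tag names checked are common conventions rather than a prescribed list. It collects clues that never render in the page body, such as comment blocks carrying revision dates.

```python
import re

def hidden_attribution(html: str) -> dict:
    """Collect attribution clues that never render in the page body:
    common meta tags and HTML comments, which often carry revision
    dates or editor names."""
    meta = dict(re.findall(
        r'<meta\s+name="(author|date|last-modified|generator)"\s+content="([^"]*)"',
        html, flags=re.IGNORECASE))
    comments = [c.strip() for c in re.findall(r'<!--(.*?)-->', html, flags=re.DOTALL)]
    return {"meta": meta, "comments": comments}

# A made-up page illustrating the kind of clues that hide in the source
page = ('<html><head><meta name="author" content="J. Smith">'
        '<meta name="date" content="2001-11-30"></head>'
        '<body><!-- revised 30 Nov 2001 by webmaster --><p>News</p></body></html>')

clues = hidden_attribution(page)
print(clues["meta"]["date"])   # 2001-11-30
print(clues["comments"][0])    # revised 30 Nov 2001 by webmaster
```

A real evaluation would still require reading the page itself; this only surfaces the hidden material for inspection.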

• Motivation.

• Information has value; therefore, know why a source provides

information for free.74

69. Basch, 110, 137.
70. Basch, 32.
71. Basch, 16.
72. Basch, 140.
73. Basch, 140.
74. Basch, 77.
• The presence of a counter on a Web site indicates the author cares
that people know that other people like his site enough to visit it.75
However, I am aware that counters have also been used to falsely
indicate that a site is popular when it is not. Therefore, counters are
probably not a reliable indicator of anything. A more relevant indicator of
popularity is how many and which other Web sites include links to the
evaluated site. I suggest using AltaVista’s Link: command in the
Advanced Search area to determine this. A search of relevant newsgroups
will also indicate what other people think of a Web site.

• Relativity. What is a good source for one purpose may be

insufficient for another purpose.76 This is another reason that I think

that Web sites are best evaluated by subject-matter experts. A

novice or generalist who evaluates a Web site for someone else

should indicate his own level of knowledge in the topic area. This

also relates to thesis survey question 9, which asked analysts to

evaluate how credible a source must be to use it for different

intelligence products.

All of the statements listed above from respected Internet searchers
contributed to thesis survey question 8, which asked how much specific
criteria contribute to the credibility of Web sites.

Alison Cooke’s Authoritative Guide to Evaluating Information on the
Internet addressed three areas: what high-quality information is, how to
find it, and how to evaluate it. Each of these areas contributed to the development
75. Basch, 132.
76. Basch, 133.
of relevant questions in the thesis survey. On the topic of high-quality

information, Cooke explains that some of the most common problems with

the Internet include:77

• information overload

• too much useless information

• potentially inaccurate material

• outdated material

Publishing has become so easy that researchers must comb through
thousands of supposedly related Web pages returned by search tools, results
that do not even include databases, news services, and FTP sites. Search
engines are of no help in determining quality or relevance; most are only an
index of the Web pages found.

Cooke explains that without the filtering provided by commercial and

academic publishers, people publish because they can, not because they

have something useful to share.78 I have observed that this is a serious

problem because it camouflages the useful information and requires a great

amount of time to sort through. A useless site can have all the gloss, format,

and authoritative “lingo” of a useful site, yet have no useful content.

Cooke contends that accuracy is perhaps of most concern to

researchers and professionals. As an example of the accuracy issue, Cooke

explains that of forty WWW medical sites evaluated, only four offered
advice close to the authoritative published recommendations.79 I believe that

this level of inaccuracy is possible because Web authors are their own editor
77. Cooke, 89.
78. Cooke, 12.
79. Cooke, 62.
and publisher, allowing no opportunity for the critical review that most
scholars and professionals welcome.

Methods for finding data on the Web are unique to the Web and online

sources. Cooke explains in great detail the advantages and disadvantages

of:

• search engines

• review and rating services

• subject catalogs and directories

• subject-based gateway services and virtual libraries

Cooke explains that search engines such as Excite and Lycos (or

AltaVista, which is still solvent) are comprehensive, unfocused, have poor

relevance ranking, and are not useful for finding or evaluating sources for

quality. They are also generally limited to Web sites and index every page on

every site, further multiplying the number of results per query.80 I have

observed that some search engines such as Google have resolved this

multiple indexing of a single site by displaying only the first indexed page,

unless one requests more.

Cooke also writes that subject catalogs and directories such as Yahoo

and Galaxy are more useful because site authors write the site descriptions;

catalog experts choose the hierarchy category to place the site; and only

sites are indexed, not every page. However, these sites are still very large,

and because the indexing is done by people rather than machines, as is the

80. Cooke, Chapter 2.
case with search engines, Web site directories are not revisited as often and

may become outdated.81

Cooke also wrote that rating and reviewing services use different,

usually unpublished criteria for rating the best sites. These include

Encyclopaedia Britannica’s Internet Guide and Lycos Top 5 percent.82 These

are even better yet for finding high-quality sources because a person other

than the author has reviewed the site based on some criteria. However,

these criteria are targeted to a general audience, not the academic or

professional. Higher weight may be given to organization and graphics, than

for content or accuracy, and the evaluators are not subject-matter experts.83

Cooke believes that the best place to find high-quality sources is from

subject-based gateway services and virtual libraries. These facilities are

designed by librarians or subject-matter experts, and use common indexing

methods used in libraries. They are often subject-matter specific and site

descriptions are evaluated and described by subject-matter experts.84

The last section of Cooke’s book gives checklists of evaluation criteria

for several internet source types. The criteria can be used for overall

evaluation of Web sites, not specifically for credibility as this thesis does.

Cooke’s criteria are based on surveys of hundreds of internet users, and were

81. Cooke, Chapter 2.
82. Cooke, Chapter 2.
83. Cooke, Chapter 2.
84. Cooke, 92.
validated by professional librarians. The unique evaluation criteria for each

type of Web site are fully described.

The source types described in this book, with general evaluation

criteria, included:

• organizational WWW sites

• personal home pages

• subject-based WWW sites

• electronic journals and magazines

• image-based and multimedia sources

• USENET newsgroups and discussion groups

• databases

• FTP archives

• current awareness services

• FAQs

Criteria for assessing an organizational Web site should include the

authority and reputation of the institution within its field, as well as the date

the page was last updated.85 Criteria for a subject-based Web site include

the purpose of the site, comprehensiveness, and whether the page includes

pointers to other sources for more information.86 Evaluation criteria for

electronic journals and magazines include the site’s authority and reputation

as well as whether the site has been referenced by a known reputable journal

85. Cooke, 90.
86. Cooke, 97.
that filters its own articles for accuracy.87 These criteria were included in the

survey questions for this thesis.

SURVEY FINDINGS, CREDIBILITY CRITERIA

The primary purpose of the thesis survey was to identify criteria for

assessing the credibility of a Web site. The recommended credibility criteria

were determined by a multi-step process. First, all credibility criteria
recommended by experts in the literature review were listed and then
consolidated. The consolidated list of expert criteria was then included in
the thesis survey to industry and intelligence analysts as questions 8a
through 8r. Those criteria to which analysts most often gave a credibility value
of 50 percent or higher were then listed as recommendations. Note that

only three criteria were rejected by 50 percent or more of the
respondents. The first two were not recommended by experts, but were

added to assess the basic knowledge of respondents and as control

questions, which were not expected to be accepted by respondents.

Rejected criteria included:

8d. Listed in a search engine such as AltaVista.


8e. Listed in a Web directory organized by people, such as Yahoo.
8r. Professional writing style of Web page

Then the mean credibility (the average analyst-chosen score) was
calculated for each recommended criterion from question 8. The mean then
became the relative value, or weight, for each criterion.
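The weighting step is simple arithmetic. The sketch below uses made-up response data rather than the actual survey returns; scores are on the survey's scale (1=0 percent through 6=100 percent credible), with None marking a missing case:

```python
# Illustrative (not actual survey) responses on the 1–6 scale:
# 1=0%, 2=10%, 3=25%, 4=50%, 5=75%, 6=100% credible; None = unanswered.
responses = {
    "8j. Author is reputable.": [5, 6, 4, 5, None],
    "8m. Content can be corroborated.": [6, 5, 5, 6],
}

weights = {}
for criterion, scores in responses.items():
    valid = [s for s in scores if s is not None]   # drop missing cases
    weights[criterion] = sum(valid) / len(valid)   # mean becomes the weight

print(weights["8j. Author is reputable."])         # 5.0
print(weights["8m. Content can be corroborated."]) # 5.5
```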

87. Cooke, 98.
The criteria recommended in survey question 6 were then listed, and

consolidated. The methodology planned to add to the recommended list from
question 8 any criteria from question 6 that were not already on that list
and that had a mode occurrence of 50 percent or greater (at least half the
analysts listed the criterion). Surprisingly, there

were no criteria recommended by half or more of the respondents in the

open survey question number 6. The criteria that were mentioned most

often were: corroboration (28 occurrences), bias (14 occurrences), reputation

of the source (10 occurrences), source’s authority or credentials (8

occurrences), and presentation (7 occurrences).88 However, each of these
most-often suggested criteria, except source authority, was also suggested
by published experts discussed in the literature review, and was recommended
by 50 percent or more of respondents when asked about those specific criteria
in survey questions 8a-8r. Therefore, no additional criteria were added from

question 6.

Table 1 below includes the results of the criteria surveyed,

the relative values of each criterion, and which criteria were chosen for

recommendation.89

Table 1. Question 8a to 8r, Recommended Criteria and Relative Values (Mean).(a)

Criteria                                       Valid  Missing  Mean  Mode  Recommended
8a. Recommended by subject-matter expert
    in the topic of the Web page.               66      0      4.94   5     Yes
8b. Recommended by a generalist.                65      1      3.65   4     Yes
8c. Listed by an Internet subject guide
    that evaluates Web sites.                   63      3      3.56   4     Yes
8d. Listed in a search engine such as
    AltaVista.                                  64      2      2.39   1     No
8e. Listed in a Web directory organized by
    people, such as Yahoo.                      62      4      2.65   2     No
8f. Content is perceived current.               64      2      3.78   5     Yes
8g. Content is perceived accurate.              63      3      4.56   5     Yes
8h. A peer or editor reviewed the content.      65      1      4.52   5     Yes
8i. Content's bias is obvious.                  65      1      3.06   4     Yes
8j. Author is reputable.                        64      2      4.64   5     Yes
8k. Author is associated with a reputable
    organization.                               65      1      4.42   5     Yes
8l. Publisher or Web host is reputable.         65      1      4.02   5     Yes
8m. Content can be corroborated with other
    sources.                                    65      1      5.17   5     Yes
8n. Other Web sites link to, or give credit
    to, the evaluated site.                     65      1      3.68   5(b)  Yes
8o. Server or domain is a copyrighted or
    trademarked name, like IBM.com.             65      1      3.45   4     Yes
8p. Statement of attribution.                   64      2      3.78   5     Yes
8q. Professional appearance of Web site.        65      1      2.86   4     Yes
8r. Professional writing style of Web page.     64      2      3.16   3     No

(a) Table Explanatory Notes. Mode values: 1=0 percent, 2=10 percent, 3=25
percent, 4=50 percent, 5=75 percent, 6=100 percent credible. Mode is the
most-often chosen score respondents gave each criterion. Only modes of 50
percent credible and higher are recommended. The Mean is the average score
respondents gave each criterion. The Mean is assigned to each recommended
criterion as its relative value, which is later summed when evaluating a Web
site.
(b) Multiple modes exist. The smallest value is shown.

88. See Table 15. Survey Question 6: Personal Criteria Analysts Currently Use
to Determine Credibility.
89. Survey, questions 8a – 8r.

The last step of the process to identify commonly agreed-upon
credibility criteria and to assign relative weights involved applying the

recommended criteria to known credible, and known non-credible Web sites,

to establish benchmarks and a relative credibility scale. Three credible sites

known to the author or recommended by a subject expert were evaluated to

establish the high end of the relative credibility scale. The relative values of

each criterion that the site satisfied were then summed for the site’s relative

credibility score. Then the average of the three credible Web sites was

calculated as the benchmark credible score. See Appendix A for the

evaluation worksheets, and detailed evaluation for these Web sites.
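The scoring and benchmarking steps can be sketched in a few lines. The criterion names and weights below are an illustrative subset of Table 1, not the full worksheet in Appendix A:

```python
# Hypothetical subset of Table 1 weights (each criterion's mean survey score)
WEIGHTS = {
    "expert_recommended": 4.94,
    "content_accurate":   4.56,
    "author_reputable":   4.64,
    "corroborated":       5.17,
}

def credibility_score(satisfied):
    """Sum the relative values of every recommended criterion the site meets."""
    return sum(WEIGHTS[c] for c in satisfied)

def benchmark(credible_sites):
    """The average score of known-credible sites anchors the high end of the scale."""
    scores = [credibility_score(s) for s in credible_sites]
    return sum(scores) / len(scores)

site = {"author_reputable", "corroborated"}
print(round(credibility_score(site), 2))  # 9.81
```

With the full set of recommended criteria from Table 1, the same summation produced the benchmark scores reported below.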

Surprisingly, it was easier to find known credible Web sites to evaluate
than known non-credible ones. This was because it did not seem useful to
benchmark a Web site so obviously non-credible that no analyst would
consider using it, negating the need for an evaluation at all. Due to this
difficulty, only one non-credible Web site was

evaluated. Due to concerns about potential libel claims, this non-credible

Web site will be referenced here by the pseudonym “KoreanNewsSite.” The

KoreanNewsSite was selected because the author had evaluated this site for

a previous research paper and had found it non-credible, and yet a challenge

to evaluate. The challenge to evaluating it came from its mix of very credible

links, unknown contributing authors, and non-credible articles by the

publisher. The key points that made the publisher’s articles non-credible

included a general lack of authoritative citations to source documents, lack of

dates on the articles, a distinct bias camouflaged by corroborative facts, and

inaccuracies. Relevant newsgroup discussions indicated that the publishing

author had a poor reputation for these same reasons.

The figures below represent the relative credibility scale and how these

benchmarks were determined. Based on these evaluations, a very credible

Web site should rate a relative credibility score of about 46.75, and a non-

credible site should rate a relative credibility score of about 7.46.

Benchmark Credible Web Sites                             Evaluated Score
Spot Image Corporation, www.spot.com                          43.19
International Telecommunications Union, www.itu.int           48.24
NY Times On the Web, nytimes.com                              48.82
Average Score                                                 46.75

Benchmark Non-credible Web Site                          Evaluated Score
KoreanNewsSite                                                 7.46

Relative Credibility Scale:
46.75 = Very Credible
 7.46 = Non-credible
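One way to read a newly evaluated site against these two benchmarks (an interpretation added here, not a step the thesis itself performs) is to express its score as a fraction of the distance from the non-credible anchor to the very-credible one:

```python
HIGH, LOW = 46.75, 7.46  # benchmark scores from the evaluations above

def scale_position(score):
    """Express a site's summed criteria score as a fraction of the distance
    from the non-credible benchmark to the very-credible benchmark."""
    return (score - LOW) / (HIGH - LOW)

print(round(scale_position(46.75), 2))  # 1.0
print(round(scale_position(27.1), 2))   # 0.5
```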

SURVEY FINDINGS, CREDIBLE ENOUGH FOR INTELLIGENCE USE

As discussed in the methodology chapter, having a relative scale is

useful from an academic perspective; however, to be of practical use, an
analyst must also know what the target or required level of credibility is for

a source he would like to use in an intelligence product. The required level of

credibility for intelligence sources was determined by survey questions 9a –

9f, which asked:90

“How credible must an intelligence source be to use its data in the


following intelligence products?”

7) No Opinion
6) 100 percent Credible
5) 75 percent Credible
4) 50 percent Credible
3) 25 percent Credible
2) 10 percent Credible
1) 0 percent Credible

9a. Research, or topic summaries


9b. Current, day-to-day developments
9c. Estimative, identifies trends or forecasts opportunities or threats
9d. Operational, tailored, focused to support an activity
9e. Scientific, or technical, in-depth, focused assessments

90. Survey, questions 9a – 9f.
9f. Warning, an alert to take action

The following calculations were used to determine the product-

credibility level for six types of intelligence products. The mode was

calculated for survey questions 9a – 9f. The mode is the most-often chosen

required level of source credibility. The statistics indicate that most analysts

believe that all types of intelligence products require that sources be 75

percent credible.91 This was a surprise because the author expected to see a

greater variance in the required levels of source credibility, with warning

intelligence requiring the least credibility and in-depth focused assessments

requiring the greatest level of credibility. This presumption was based on the

belief that analysts require less information about an imminent threat than
they do about a future scientific or political condition, because the potential
impact of ignoring even the least-credible warning of a threat is so much
greater than that of ignoring the most significant emerging scientific or
political condition. Apparently, most

analysts do not understand the relationship of intelligence products to

outcomes, or the survey question was flawed.

However, using the survey results, the sources of all intelligence

products should be 75 percent credible. If the most credible Web sites have a

relative-credibility score of 46.75 as demonstrated above, then intelligence

products should be 75 percent of that, which is 35.06. Therefore, the target-

credibility level of any intelligence source is 35.06, as evaluated by the

recommended credibility criteria. The following table shows the most-often

chosen (mode) required credibility level for intelligence products.

91. See Table 2.
Table 2. Questions 9a-f. Required Level of Source Credibility for
Intelligence Products.92

                                           Number of Cases   Required Credibility
                                           Valid  Missing    Mode(b)         Range
9a. Research, special topic summaries       35      31     50 percent(a)  0-100 percent
9b. Current, day-to-day developments        35      31     75 percent     0-100 percent
9c. Estimative, identifies trends or
    forecasts opportunities or threats      35      31     75 percent     0-100 percent
9d. Operational, tailored, focused, to
    support a military, intelligence, or
    diplomatic activity                     35      31     75 percent     0-100 percent
9e. Scientific or technical, in-depth,
    focused assessments of trends or
    capabilities                            35      31     75 percent     0-100 percent
9f. Warning, an alert to take action        35      31     75 percent     0-100 percent
Required-credibility level for all
Intelligence Product Sources                               75 percent
(a) Multiple modes exist. The smallest value is shown. Just as many
respondents chose 75 percent.
(b) Missing responses are primarily because non-Intelligence Community
personnel were not asked these questions in the survey. Mode is based on
valid responses.
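The arithmetic behind the 35.06 target can be checked in a few lines; the threshold function is a straightforward reading of the thesis's rule rather than a formal procedure it defines:

```python
BENCHMARK_CREDIBLE = 46.75  # average score of the three known-credible sites
REQUIRED_FRACTION = 0.75    # mode response across products (questions 9a-9f)

TARGET = BENCHMARK_CREDIBLE * REQUIRED_FRACTION
print(round(TARGET, 2))  # 35.06

def usable_in_product(site_score):
    """A Web source qualifies for use in an intelligence product when its
    summed criteria score meets the target-credibility level."""
    return site_score >= TARGET
```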

SURVEY FINDINGS, OFFICIAL CREDIBILITY CRITERIA

Question 5 asked, “Does your organization have official criteria that

you are told to use for determining the credibility of any source? "Any source"

means published, proprietary, and classified sources.”93 The purpose of this

question was to determine if analysts are aware of credibility criteria that

they can use to ensure a consistent quality of reporting. The assumption


92. Survey, questions 9a – 9f.
93. Survey, question 5.
here is that only criteria formally sanctioned by the organization are likely to

be consistently followed. As the tables below indicate, 86.2 percent of

analysts are either not aware of official credibility criteria or do not think such

criteria exist in their organization for unclassified sources, and 70.4 percent

are unaware of criteria for classified sources.

Many analysts commented that they rely on their own or other expert

opinions to determine source credibility, and official criteria are not needed.

In many cases this may be true; however, in a large organization there are

many levels of expertise, and without criteria and standards overall reporting

takes on the credibility of the least qualified analyst. Without such criteria,

analysts cannot even intelligently discuss credibility because there is no

common vocabulary to do so. The words credibility, reliability, and validity

are used interchangeably with no consensus on the definitions. Credibility

certainly means different things to HUMINT (Human Intelligence) analysts

than it does to IMINT (Imagery Intelligence) analysts. Tables 3 and 4 below

demonstrate that the vast majority of analysts are not aware of source

credibility criteria in their organizations.

Table 3. Question 5. Part 1, Official Criteria for Unclassified Sources.94

                                        Cases  Percent  Valid    Cumulative
                                                        Percent  Percent
No, I don't know the official criteria    49    74.2     75.4      75.4
for unclassified sources.
No, I don't know of official criteria      7    10.6     10.8      86.2
for unclassified sources.
Yes, I know the official criteria for      9    13.6     13.8     100.0
unclassified sources.
Total                                     65    98.5    100.0
Missing                                    1     1.5
Total                                     66   100.0

Table 4. Question 5. Part 2, Official Criteria for Classified Sources.95

                                        Cases  Percent  Valid    Cumulative
                                                        Percent  Percent
No, I don't know the official criteria     3     4.5     11.1      11.1
for classified sources.
No, I don't know of official criteria     16    24.2     59.3      70.4
for classified sources.
Yes, I know the official criteria for      8    12.1     29.6     100.0
classified sources.
Total                                     27    40.9    100.0
Missing, non-applicable cases             31    47.0
Missing, left blank                        8    12.1
Total Missing                             39    59.1
Total                                     66   100.0

94. Survey, question 5.
95. Survey, question 5.
SURVEY FINDINGS, OBJECTIVITY AND FOREIGN LANGUAGE SOURCES

The author suspected that well-known sources were considered more

credible than obscure sources, even if the analysts had never observed the

well-known sources. Survey question 7 was designed to address this issue.96

However, because many analysts had first hand knowledge of many of the

well-known sources, their evaluations were biased, and could not be made on

knowledge of the titles alone. Therefore, this question of whether analysts

are biased toward well-known sources, regardless of their personal

knowledge of the sources, remains unresolved.

However, it may be useful to know that analysts most often rated the
well-known sources as credible, which equates to a 5 on a scale of 1 to 7,
and the obscure sources, of which no one is likely to have personal
knowledge, as undecided, a 4 on the same scale.97 It

is interesting to note that many analysts, who chose undecided for both the

well-known and obscure sources, explained that they were unable to decide

because of a lack of knowledge about the sources. This is a positive

indication that analysts do not assume that a source is credible because they

have heard of it, but never seen it. No analysts rated any open source as

Certainly True, and only one source, JanesDefenseWeekly.com Web site, was

most often rated as Strongly Credible.

96. Survey, question 7.
97. See Tables 5 and 6.
Also, there was no significant difference in the rating given to native-

language Web sites versus foreign English-language Web sites.98 This was an

issue because in discussions with analysts before the survey, some analysts

said that they had observed a difference in the content of the native-

language and English-language versions of the same Web sites.99 If this is an

issue, it is apparently not one many analysts have observed.100 Questions
7a-7m asked:101

7. How credible are the following information sources given only their
titles? Choose one from the following scale:

___7) = Certainly True


___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False

Table 5. Questions 7a, b, c, j, k, l, m, Credibility of Well-Known Titles.102

                                Valid  Missing   Mean    Mode               Std. Deviation
Q7a. Credibility of NY Times     64      2      5.2188   Credible           .8061
Q7b. Credibility of Wash Post    65      1      5.1077   Credible           .8315
Q7c. Harvard.edu Web Site        63      3      4.7619   Undecided          .8560
Q7j. NationalGeographic.com
     Web Site                    61      5      5.1148   Credible           .8583
Q7k. JanesDefenseWeekly.com
     Web Site                    64      2      5.3438   Strongly Credible  .8207
Q7l. InformationWeek.com
     Web Site                    61      5      4.7049   Credible           .7152
Q7m. DowJonesInteractive.com
     Web Site                    61      5      4.9508   Credible           .7622
Overall Credibility                             5.0289   Credible

98. Survey, questions 7d – 7i.
99. See Table 6.
100. See Table 6.
101. Survey, questions 7a – 7m.
102. Survey, questions 7a, 7b, 7c, 7j, 7k, 7l, 7m.

Table 6. Questions 7d, e, f, g, h, i, Credibility of Obscure Titles, and
Foreign Web Sites.103

                                  Valid  Missing   Mean    Mode       Std. Deviation
Q7d. RussianArmy.ru                60      6      3.8500   Undecided  .7089
Q7e. RussianArmy.ru in English     60      6      3.6667   Undecided  .9144
Q7f. IsraelIndependentNews.il
     in Hebrew                     56     10      4.0714   Undecided  .5345
Q7g. IsraelIndependentNews.il
     in English                    59      7      4.1186   Undecided  .5597
Q7h. FrenchIndependentNews.fr
     Web Site in French            57      9      4.1404   Undecided  .5154
Q7i. FrenchIndependentNews.fr
     Web Site in English           59      7      4.1186   Undecided  .5897
Overall Credibility                               3.9942   Undecided

SURVEY FINDINGS, CLASSIFIED VS. UNCLASSIFIED SOURCES

103. Survey, questions 7d, 7e, 7f, 7g, 7h, 7i.
In any discussion of open source credibility within the Intelligence

Community, the question arises: how does open source credibility compare to
that of classified sources? This issue affects the relevance of the research
question because if classified and unclassified sources have the same
credibility, then the IC should focus more on the cheaper sources, which are
presumably the unclassified sources. However, the productivity of open

sources and classified sources would affect the cost of intelligence, which

may be a topic for another thesis. Survey questions 7a - 7s answered the

question of unclassified versus classified sources, although not as

conclusively as most analysts would hope. The average (mean) credibility

rating given to all classified sources was 5.0811 on a scale of 1-to-7, which

equates to credible; however, the most-often chosen (mode) rating was 4.0,

which is undecided.104 As the table below demonstrates, the mean and the

mode disagree.

Table 7. Questions 7n to 7s, Credibility of All Classified Sources.105
Scale: 1=Certainly False, 2=Strongly Non-credible, 3=Non-credible,
4=Undecided, 5=Credible, 6=Strongly Credible, 7=Certainly True.

                                    Valid  Missing   Mean    Mode   Std. Deviation
Q7n. HUMINT sources with no
     reporting record.               34      32     4.0294   4.00    .7582
Q7o. HUMINT sources with a
     proven reporting record.        34      32     5.3235   5.00    .5349
Q7p. IMINT with national analysts'
     annotations or comments.        34      32     5.6176   6.00    .7791
Q7q. IMINT without national
     analysts' annotations or
     comments.                       35      31     4.9429   4.00   1.0831
Q7r. SIGINT reports                  35      31     5.5429   5.00    .8521
Q7s. MASINT                          33      33     5.0303   4.00   1.0150
Overall Credibility                                 5.0811   4.00
                                                    Credible Undecided

104. See Table 7.
105. Survey, questions 7n – 7s.

This difference between mean and mode demonstrates the limitations

of statistics to answer questions meaningfully when there is too much

variance in responses. That was the case here. Statistically, classified

sources were rated credible (5.0811) on average, compared to unclassified

sources, which were rated undecided (4.55) on average. However, the most-

often chosen rating (mode) of both classified and unclassified sources was

undecided (4.0).
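The divergence is easy to reproduce with illustrative data (not the survey's actual response set): a skewed, clustered spread routinely gives a mean and a mode that tell different stories.

```python
from statistics import mean, mode

# Illustrative ratings on the 1-7 scale (4=Undecided, 5=Credible, ...)
ratings = [4, 4, 4, 4, 5, 6, 6, 7, 7]
print(round(mean(ratings), 2))  # 5.22 -> "credible" on average
print(mode(ratings))            # 4    -> "undecided" most often
```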

Table 8. Credibility of Open Sources Compared to Classified Sources.106
Scale: 1=Certainly False, 2=Strongly Non-credible, 3=Non-credible,
4=Undecided, 5=Credible, 6=Strongly Credible, 7=Certainly True.

                                                   Mean             Mode
Overall Credibility, Obscure Open Sources          3.9942           4.0 Undecided
Overall Credibility, Well-Known Open Sources       5.0289           5.0 Credible
Overall Credibility of All Open Sources            4.55 Undecided   5.0 Undecided
Overall Credibility of All Classified Sources      5.0811 Credible  5.0 Undecided

This unusual variance is most evident with question 7q, “IMINT

(Imagery Intelligence) without national analysts’ annotations or

106. Survey, questions 7a – 7s.
comments.”107 As Graph 1 below shows, there is little consensus on the

credibility of IMINT without annotations, and the chart lacks the expected bell

curve, or ski slope variance.

[Histogram omitted: question 7q responses on the x-axis (3=Non-credible,
4=Undecided, 5=Credible, 6=Strongly Credible, 7=Certainly True) against
number of cases on the y-axis; Std. Dev = 1.08, Mean = 4.9, N = 35.]

Graph 1. Question 7q, Credibility of IMINT Without Annotations.

Table 9. Question 7q, Credibility of IMINT Without Annotations.108

                                        Frequency  Percent  Valid Percent
Valid    (7) Certainly True                  3        4.5        8.6
         (6) Strongly Credible               9       13.6       25.7
         (5) Credible                        7       10.6       20.0
         (4) Undecided                      15       22.7       42.9
         (3) Non-credible                    1        1.5        2.9
         Total                              35       53.0      100.0
Missing  Not Applicable to Respondent       31       47.0
Total                                       66      100.0

107. Survey, question 7q.
108. Survey, question 7q.
CHAPTER 5

CONCLUSIONS

The research question was, how to identify credible sources on the

Web. The author hypothesized that this could be done by analysts who are

not expert in the subject of the Web site, by applying criteria identified by

expert Web searchers and judged by analysts to add at least 50-percent

credibility to a Web site. This hypothesis was proven by applying the

recommended credibility criteria to four Web sites of known credibility. The

credible sites scored high and the one non-credible site scored very low, even

though it appeared credible to a casual observer. The scores produced by

these benchmark Web sites then functioned as the high and low ends of a

relative-credibility scale. Other evaluated Web sites’ scores will likely fall

between the high and low ends of the credibility scale. The position of the

evaluated site on the scale then puts the site’s level of credibility in context.
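Because the scale is linear, an evaluated site's position between the two benchmark endpoints can be expressed as a simple interpolation. The sketch below assumes the endpoint values from Appendix A; the function name is illustrative, not from the thesis:

```python
# Endpoints of the relative-credibility scale, taken from the benchmark
# worksheet totals in Appendix A.
LOW_END = 7.46    # the non-credible benchmark site
HIGH_END = 46.75  # average total of the three credible benchmark sites

def relative_position(score, low=LOW_END, high=HIGH_END):
    """Fraction of the way from the non-credible end to the credible end."""
    return (score - low) / (high - low)

# A site scoring 35.06 (the 75-percent target score listed in Appendix A)
# sits about 70 percent of the way up the scale.
print(round(relative_position(35.06), 2))  # 0.7
```

A consumer reading an evaluation sheet can therefore interpret any worksheet total at a glance, without knowing the individual criterion weights.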

The criteria and scale were useful on their own, but intelligence

analysts need to know how credible is enough; therefore, the thesis survey

also asked analysts to rate how credible a source should be to allow its use in

several types of intelligence products. The result was that sources for all

types of intelligence products should be at least 75-percent credible.

Therefore, if the most credible benchmarked Web sites score an average of

46.75 on the credibility scale, 75 percent of that is 35.06. The target-

credibility level for any intelligence source used in an intelligence report is

then 35.06 on a linear scale of 7.46 (Non-credible) to 46.75 (Very Credible).
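This arithmetic can be checked in a few lines. The sketch below uses the benchmark worksheet totals from Appendix A; the variable names are illustrative:

```python
# Benchmark worksheet totals from Appendix A (Spot, ITU, NY Times).
credible_benchmarks = [43.19, 48.24, 48.82]

high_end = sum(credible_benchmarks) / len(credible_benchmarks)
target = 0.75 * high_end  # analysts' stated minimum: 75-percent credibility

print(round(high_end, 2))  # 46.75, the "Very Credible" end of the scale
print(round(target, 2))    # 35.06, the target score for intelligence sources
```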

The fact that the known credible Web sites scored high and the known poor

site scored very low validated the recommended credibility criteria and their

relative credibility scores (weights). It is also of interest that the top four

criteria suggested by surveyed analysts (survey question 6) were also among

the criteria suggested by experts in the literature review, and scored high

enough in the survey of expert criteria (survey question 8) to be included in

the recommended criteria of this thesis. These common criteria included:

corroboration, bias, reputation of the source or author, source authority, and

presentation (or professional appearance).

A worksheet containing the criteria, weights, and scale is included in

Appendix A, Table 14; any analyst may now use it to evaluate Web

sites and communicate that evaluation to other analysts and consumers.

These criteria are not without a weakness. Some criteria require a modest

amount of research by the evaluator and may be biased toward the

evaluator’s level of knowledge of the subject. As Bowen suggested, experts

will still make the better evaluators.109 Therefore, all evaluators should

include their own credentials on the evaluation sheets they share to maintain

a level of credibility in the evaluation process. This evaluation process could

be implemented community-wide if an Open Source Information System

(OSIS) participating organization were to adopt it and begin a virtual index of

evaluated sites to which any OSIS user could contribute. Such an index

would soon constitute an intelligence catalog of credible Web sites. It is clear

that such an intelligence catalog would save all-source analysts many

hours of research, multiply the knowledge of experts, add to the credibility of

open sources in intelligence products, and reduce the need for classified

research when the data is available from an obscure but credible Web site.

109. Bowen, under "Collection Strategy."

These criteria may also be useful to Web research instructors when

explaining the importance of knowing one’s source.

There were also several issues that may have affected the objectivity

of the survey results and the relative importance of some criteria. These

were identified in the thesis as key issues. The key issue of open source’s

relevance to intelligence is covered above. The survey clearly showed that

70 to 86 percent of analysts were not aware of official criteria for evaluating

intelligence sources. This could have a detrimental effect on the credibility of

intelligence reporting. Although the author expected to find an analyst bias

toward well-known source titles, such a bias was not evident in the survey

results. Also, analysts did not generally believe that the English-language

versions of foreign Web sites were less credible than the native-language

versions. Surprisingly, open sources and classified sources scored about

equal in their level of credibility. However, there was a wider range of

opinion on the credibility of classified sources than there was on the

credibility of open sources. This could be due to a wider range of knowledge

about the classified sources, although analysts were given the opportunity to

choose no opinion, which few chose. The final conclusion of this thesis is that

any analyst can discern credible Web sites from non-credible ones by using

the recommended criteria. Although evaluations are best done by subject-

matter experts, any analyst can evaluate a Web site using standard criteria,

which were recommended by expert researchers and approved by a broad

selection of analysts.

To implement these criteria, I recommend that DIA or CIA, which are

the primary all-source intelligence agencies, establish an OSIS Web site that

will index the Web site evaluation sheets completed by subject-matter

experts throughout the Intelligence Community. If both of these agencies

agreed to the criteria included here or other criteria, the rest of the

community would likely follow. This is a simple solution to a complex

problem, which would significantly reduce the duplication of Web site

evaluations throughout the Intelligence Community, and would provide a

great number of analysts the benefit of expert recommendations.

Alternatively, this open source index could be divided by subject area, and

volunteer subject-matter experts throughout the IC could evaluate Web sites

for their subject area alone. Volunteer subject guides have already been

used on the Internet by Yahoo and other online companies. However, it is not

often clear how they evaluate Web sites. This IC index of Web sites would

contain established evaluation criteria, supported by recognized experts

either managing the indexes or contributing to them.

APPENDIX A

WEB SITE EVALUATION WORKSHEETS

This appendix includes the relative credibility scale, benchmark Web

site evaluation worksheets, and blank evaluation worksheet.

Credible Benchmark Web Sites Evaluated                         TOTAL
Spot Image Corporation, www.spot.com                           43.19
International Telecommunications Union, http://www.itu.int     48.24
NY Times On the Web, http://nytimes.com                        48.82
AVERAGE SCORE                                                  46.75

Non-credible Benchmark Web Site Evaluated                      TOTAL
Korean Web Weekly, kimsoft.com                                  7.46

Relative Credibility Scale
46.75 = Very Credible
 7.46 = Non-credible

Table 10. Benchmark Web Site Evaluation Work Sheet, Spot.
Site Name, Address: Spot Image Corporation, http://www.spot.com
Evaluator Name, Expertise: Dax Norman, Intelligence Analyst,
Telecommunications, and Government

8a. Recommended by subject-matter expert in the topic of the Web page.
    Mean Score: 4.94. Satisfied? YES.
    Comments: Recommended by Dr. Bowen, lecturer in War Studies at King's
    College London, in an article for Jane's Intelligence Review, 11/01/1999,
    "Open-source Intel: A Valuable National Security Resource".
8b. Recommended by a generalist.
    Mean Score: 3.65. Satisfied? NO.
8c. Listed by an Internet subject guide that evaluates Web sites.
    Mean Score: 3.56.
8f. Content is perceived current.
    Mean Score: 3.78. Satisfied? YES.
8g. Content is perceived accurate.
    Mean Score: 4.56. Satisfied? YES.
8h. A peer or editor reviewed the content.
    Mean Score: 4.52. Satisfied? NO.
8i. Content's bias is obvious.
    Mean Score: 3.06. Satisfied? YES.
    Comments: Biased toward accurate data.
8j. Author is reputable.
    Mean Score: 4.64. Satisfied? YES.
    Comments: Source rather than author is SPOT, CNES, the French space
    agency, and other satellite companies.
8k. Author is associated with a reputable organization.
    Mean Score: 4.42. Satisfied? YES.
    Comments: Sources, rather than the author, are commercial satellite
    companies who sell their products through the reputable organization,
    SPOT Imagery Corp. A search of groups.google.com located many
    favorable articles about the company, including a news release from the
    newsgroup sci.space.news by
Table 11. Benchmark Web Site Evaluation Work Sheet, ITU.
Site Name, Address: International Telecommunications Union,
http://www.itu.int
Evaluator Name, Expertise: Dax Norman, Intelligence Analyst,
Telecommunications, and Government

8a. Recommended by subject-matter expert in the topic of the Web page.
    Mean Score: 4.94. Satisfied? YES.
    Comments: Used by the U.S. Government.
8b. Recommended by a generalist.
    Mean Score: 3.65. Satisfied? NO.
8c. Listed by an Internet subject guide that evaluates Web sites.
    Mean Score: 3.56.
8f. Content is perceived current.
    Mean Score: 3.78. Satisfied? YES.
8g. Content is perceived accurate.
    Mean Score: 4.56. Satisfied? YES.
8h. A peer or editor reviewed the content.
    Mean Score: 4.52. Satisfied? YES.
    Comments: Journal editors are listed.
8i. Content's bias is obvious.
    Mean Score: 3.06. Satisfied? YES.
    Comments: If there is a bias, it is toward making the telecommunications
    market look better than it is.
8j. Author is reputable.
    Mean Score: 4.64. Satisfied? NO.
    Comments: Authors are not always given, and little information other than
    ITU press releases can be found about them, but the sources of
    contributing data are always provided and include reputable sources such
    as the Vodafone Group.
8k. Author is associated with a reputable organization.
    Mean Score: 4.42. Satisfied? YES.
    Comments: Many authors are associated with the ITU, and many
    contributors are associated with reputable companies or Ministries of
    Post, Telephone and Telegraph. This reviewer recognizes that analysis
    found in these sources may be more positive than reality.
8l. Publisher or Web host is reputable.
    Mean Score: 4.02. Satisfied? YES.
    Comments: The on-line journal is published by the ITU, as are their other
    publications.
8m. Content can be corroborated with other sources.
    Mean Score: 5.17. Satisfied? YES.
    Comments: Content can often be found in foreign Ministry of PTT press
    releases, company press releases, and print versions of ITU reports.
8n. Other Web sites link to, or give credit to, the evaluated Web site.
    Mean Score: 3.68. Satisfied? YES.
    Comments: The following search located 47 Web pages that link to the
    ITU News Journal: link:www.itu.int/journal AND NOT host:www.itu.int.
    The following search located 15,138 Web pages that link to the main Web
    page: link:www.itu.int AND NOT host:www.itu.int.
8o. Server or domain is a copyrighted or trademark name, like IBM.com.®
    Mean Score: 3.45. Satisfied? YES.
    Comments: The ITU is recognized world-wide.
8p. Statement of attribution.
    Mean Score: 3.78. Satisfied? YES.
8q. Professional appearance of Web site.
    Mean Score: 2.86. Satisfied? YES.

TOTAL: 48.24
Table 12. Benchmark Web Site Evaluation Work Sheet, NY Times.
Site Name, Address: NY Times On the Web, http://nytimes.com
Evaluator Name, Expertise: Dax Norman, Intelligence Analyst,
Telecommunications, and Government

8a. Recommended by subject-matter expert in the topic of the Web page.
    Mean Score: 4.94. Satisfied? YES.
8b. Recommended by a generalist.
    Mean Score: 3.65. Satisfied? NO.
8c. Listed by an Internet subject guide that evaluates Web sites.
    Mean Score: 3.56.
8f. Content is perceived current.
    Mean Score: 3.78. Satisfied? YES.
8g. Content is perceived accurate.
    Mean Score: 4.56. Satisfied? YES.
    Comments: Yes, most of the time. Errors are corrected quickly online and
    given fair space.
8h. A peer or editor reviewed the content.
    Mean Score: 4.52. Satisfied? YES.
8i. Content's bias is obvious.
    Mean Score: 3.06. Satisfied? NO.
8j. Author is reputable.
    Mean Score: 4.64. Satisfied? YES.
    Comments: There are many authors who are named and generally well
    respected.
8k. Author is associated with a reputable organization.
    Mean Score: 4.42. Satisfied? YES.
    Comments: The NY Times is recognized worldwide and has been called
    the Paper of Record for the U.S.
8l. Publisher or Web host is reputable.
    Mean Score: 4.02. Satisfied? YES.
    Comments: Same as 8k.
8m. Content can be corroborated with other sources.
    Mean Score: 5.17. Satisfied? YES.
8n. Other Web sites link to, or give credit to, the evaluated Web site.
    Mean Score: 3.68. Satisfied? YES.
    Comments: Yes. 256,177 other Web pages were located that link to this
    Web site.
8o. Server or domain is a copyrighted or trademark name, like IBM.com.®
    Mean Score: 3.45. Satisfied? YES.
8p. Statement of attribution.
    Mean Score: 3.78. Satisfied? YES.
8q. Professional appearance of Web site.
    Mean Score: 2.86. Satisfied? YES.

TOTAL: 48.82
Table 13. Benchmark Web Site Evaluation Work Sheet, Korea.
Site Name, Address: KoreanNewsSite
Evaluator Name, Expertise: Dax Norman, Intelligence Analyst,
Telecommunications, and Government

8a. Recommended by subject-matter expert in the topic of the Web page.
    Mean Score: 4.94. Satisfied? NO.
8b. Recommended by a generalist.
    Mean Score: 3.65. Satisfied? NO.
8c. Listed by an Internet subject guide that evaluates Web sites.
    Mean Score: 3.56.
8f. Content is perceived current.
    Mean Score: 3.78. Satisfied? YES.
    Comments: Linked articles are one to two weeks old but are still relevant.
    However, KoreanNewsSite's personal articles are not dated.
8g. Content is perceived accurate.
    Mean Score: 4.56. Satisfied? NO.
    Comments: According to many newsgroup discussions, KoreanNewsSite's
    writing is a mix of truth and half-truths that give it the appearance of
    accuracy.110
8h. A peer or editor reviewed the content.
    Mean Score: 4.52. Satisfied? NO.
8i. Content's bias is obvious.
    Mean Score: 3.06. Satisfied? NO.
    Comments: The site includes many links to very good sources, which
    masks the strong anti-American and pro-North Korean tone revealed in
    the author's writing and his selective choice of links and previously
    published articles.
8j. Author is reputable.
    Mean Score: 4.64. Satisfied? NO.
    Comments: A search of the Google newsgroups found many discussion
    threads in the soc.culture.korean newsgroup which used KoreanNewsSite
    as a standard comparison for poor, biased reporting.111
8k. Author is associated with a reputable organization.
    Mean Score: 4.42. Satisfied? NO.
    Comments: Same as 8j.
8l. Publisher or Web host is reputable.
    Mean Score: 4.02. Satisfied? NO.
    Comments: Same as 8j.
8m. Content can be corroborated with other sources.
    Mean Score: 5.17. Satisfied? NO.
    Comments: Although many of the articles this site links to include
    corroborative information, KoreanNewsSite's own articles do not cite
    authoritative sources that can be corroborated, and usually do not include
    source data at all. A search for corroborating data also did not succeed.
8n. Other Web sites link to, or give credit to, the evaluated Web site.
    Mean Score: 3.68. Satisfied? YES.
    Comments: A search of Altavista.com using the command
    link:KoreanNewsSite AND NOT host:KoreanNewsSite.com located about
    5,000 other Web pages that link to KoreanNewsSite.com. There is clearly
    demand for KoreanNewsSite's style of journalism.
8o. Server or domain is a copyrighted or trademark name, like IBM.com.®
    Mean Score: 3.45. Satisfied? NO.
8p. Statement of attribution.
    Mean Score: 3.78. Satisfied? YES.
    Comments: KoreanNewsSite attributes articles to the other authors, and
    himself. KoreanNewsSite includes a page which he claims describes
    himself; however, there is some doubt in the newsgroups that his
    biography is true because of inconsistencies in dates and his claimed age.
8q. Professional appearance of Web site.
    Mean Score: 2.86. Satisfied? NO.
    Comments: The site looks home-made.

TOTAL: 7.46

110. Google Groups, URL: <http://groups.google.com>, accessed 26 March
2001.
111. Google Groups.

Table 14. Blank Web Site Evaluation Work Sheet.
Site Name, Address:
Evaluator Name, Expertise:
Scale: 46.75 = Very Credible, 7.46 = Non-credible
Target Score for Intelligence Sources: 35.06

1. Recommended by subject-matter expert in the topic of the Web page.
    Mean Score: 4.94. Satisfied? ____  Comments:
2. Recommended by a generalist.
    Mean Score: 3.65. Satisfied? ____  Comments:
3. Listed by an Internet subject guide that evaluates Web sites.
    Mean Score: 3.56. Satisfied? ____  Comments:
4. Content is perceived current.
    Mean Score: 3.78. Satisfied? ____  Comments:
5. Content is perceived accurate.
    Mean Score: 4.56. Satisfied? ____  Comments:
6. A peer or editor reviewed the content.
    Mean Score: 4.52. Satisfied? ____  Comments:
7. Content's bias is obvious.
    Mean Score: 3.06. Satisfied? ____  Comments:
8. Author is reputable.
    Mean Score: 4.64. Satisfied? ____  Comments:
9. Author is associated with a reputable organization.
    Mean Score: 4.42. Satisfied? ____  Comments:
10. Publisher or Web host is reputable.
    Mean Score: 4.02. Satisfied? ____  Comments:
11. Content can be corroborated with other sources.
    Mean Score: 5.17. Satisfied? ____  Comments:
12. Other Web sites link to, or give credit to, the evaluated Web site.
    Mean Score: 3.68. Satisfied? ____  Comments:
13. Server or domain is copyrighted or trademark name, like IBM.com.®
    Mean Score: 3.45. Satisfied? ____  Comments:
14. Statement of attribution.
    Mean Score: 3.78. Satisfied? ____  Comments:
15. Professional appearance of Web site.
    Mean Score: 2.86. Satisfied? ____  Comments:

TOTAL: ____

(Copyright: Dax R. Norman, 2001. Unlimited use is allowed with this statement
included.)
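Assuming the worksheet total is the sum of the mean-score weights of the criteria a site satisfies (an interpretation that reproduces the ITU benchmark total in Table 11), the evaluation can be sketched as follows. The criterion numbering follows Table 14; the function and variable names are illustrative, not from the thesis:

```python
# Sketch of the Appendix A evaluation: each criterion carries a fixed weight
# (its survey mean score from Table 14); a site's TOTAL is assumed to be the
# sum of the weights of the criteria it satisfies.
CRITERION_WEIGHTS = {
    1: 4.94,   # Recommended by subject-matter expert
    2: 3.65,   # Recommended by a generalist
    3: 3.56,   # Listed by an evaluating Internet subject guide
    4: 3.78,   # Content is perceived current
    5: 4.56,   # Content is perceived accurate
    6: 4.52,   # A peer or editor reviewed the content
    7: 3.06,   # Content's bias is obvious
    8: 4.64,   # Author is reputable
    9: 4.42,   # Author is associated with a reputable organization
    10: 4.02,  # Publisher or Web host is reputable
    11: 5.17,  # Content can be corroborated with other sources
    12: 3.68,  # Other Web sites link to or credit the site
    13: 3.45,  # Server or domain is a recognized trademark name
    14: 3.78,  # Statement of attribution
    15: 2.86,  # Professional appearance of Web site
}
TARGET_SCORE = 35.06  # 75 percent of the credible-benchmark average

def worksheet_total(satisfied):
    """Sum the weights of the satisfied criterion numbers."""
    return round(sum(CRITERION_WEIGHTS[n] for n in satisfied), 2)

def usable_as_intelligence_source(satisfied):
    return worksheet_total(satisfied) >= TARGET_SCORE

# The ITU benchmark (Table 11) satisfied every criterion except 2, 3, and 8:
itu = set(CRITERION_WEIGHTS) - {2, 3, 8}
print(worksheet_total(itu))                # 48.24, matching Table 11's TOTAL
print(usable_as_intelligence_source(itu))  # True
```

Under this reading, a site satisfying all fifteen criteria would score 60.09, so the 46.75 "Very Credible" end of the scale reflects what strong real-world benchmarks achieved rather than the maximum possible score.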

APPENDIX B

SURVEY TO INDUSTRY AND ACADEMIA

The following text is shown in its original format, except that

“incredible” was changed to “non-credible” to maintain consistency with the

thesis.

To: Professional Researchers and Analysts


From: Dax Norman, Joint Military Intelligence College Graduate Student

Please take about 20 minutes to answer the attached multiple-choice


questions, and E-mail your answers to dnorman@carr.org by 30 July 2001.
This is best done by clicking on Reply in your E-mail tool and including this
message. Please share this survey with any other analysts or researchers
who you think may be interested.

WHY:
I am conducting research for my Masters Thesis at the Joint Military
Intelligence College. To do this I need assistance from many other analysts
who use the Web professionally to find information. I understand that your
time is valuable and I sincerely appreciate your assistance. I hope to
demonstrate through this research which criteria are most widely accepted for
measuring the credibility of Web sites. Also, your comments will help me to
categorize the survey responses, and credibility criteria used by different
industries or professions regarding Web sites used by professionals across
various industries.

PRIVACY:
You do not need to include your name; however, if you choose to include your
name, it will only be used by me to contact you if I need more information
regarding your comments. I will not quote you directly unless you indicate in
Questions 3 and 4 that I may do so. Otherwise, only my Thesis
Chairman, Professor Alex Cummins (410-854-4605), and I will have access to
respondent names. Any record of the names in association with the
responses will be destroyed after the research is completed, except those
names included in the thesis with permission.

INSTRUCTIONS:

Instructions for responding to this research survey are included on page 2 of
the survey. My preferred method is to type an X next to your answers and E-
mail this survey back to dnorman@carr.org. This is most easily done by
clicking on "Reply" in your E-mail tool. Please use the comments area after
each question if you feel that you need to explain an answer. However, the
"Comment" should not be used as a response choice. Please don't conduct
any research or attempt to locate the Web sites mentioned here.

Respectfully,

Dax R. Norman
Joint Military Intelligence College, Cohort 10

xyz-xyz-xyz work
xyz-xyz-xyz fax
dnorman@xyz.xyz home E-mail

Thesis Chairman, Professor Alex Cummins, XYZ-XYZ-XYZX

------------------------------------------------Page 1 Below

JMIC THESIS SURVEY: CREDIBILITY CRITERIA FOR WEB SITES


Industry Analysts and Researchers Participation is Requested.

Research Conducted by
Dax R. Norman
JMIC, Cohort 10

Research conducted in partial fulfillment of the requirements for the


Masters of Science in Strategic Intelligence Thesis.

2 July 2001

The views expressed in this paper are those of the author and respondents,
and do not reflect the official policy or positions of the U.S. Government.

------------------------------------------------Page 2 Below

JMIC THESIS SURVEY: CREDIBILITY CRITERIA FOR WEB SITES

PLEASE SEND RESPONSES TO:


Dnorman@xyz.xyz

or
Dax R. Norman, JMIC Cohort 10
1234 Blank Street RD

Blankville USA, 12345

or
XYZ-XYZ-XYZX Office Phone
XYZ-XYZ-XYZ Commercial Fax, Attn: Dax Norman

INSTRUCTIONS: Please complete as much of the following personal


information as you are willing. Your comments will help me to categorize the
survey responses and credibility criteria used by different industries or
professions regarding Web sites used by professionals across various
industries.

Please enter your contact data:

Your Name:
Industry Segment:
Phone Numbers:
E-mail:
Profession (What kind of work do you do?):
Date:

------------------------------------------------Page 3 Below
INSTRUCTIONS: Place an X next to your answers.

1. How do you rate yourself as an Internet user? Choose one:

___a. Expert (understand differences in search tools, and use special


features in tools.)
___b. Apprentice (know the differences in search tools, but don't use
special features.)
___c. Novice (don't know the difference between a search engine and a
directory/guide.)
Comments:

2. Do you use the Web for work related research?


___a. Daily
___b. Weekly
___c. Monthly
___d. Rarely
___e. Never
Comments:

Note: The phrase, "any source" means published, proprietary, and classified.

3. May I include your name and your responses in my Joint Military


Intelligence College thesis, which will not be public information? Choose one:
___a. Responses Only
___b. Name and Responses
___c. Neither

Comments:

4. May I include your name and your responses in publicly published articles
as a follow on to this thesis? Choose one:
___a. Responses Only
___b. Name and Responses
___c. Neither
Comments:

5. Does your employer have official criteria that you are told to use for
determining the credibility of any source? "Any source" means published,
proprietary, and classified sources. Choose one:
___a. Yes, I know the official criteria for evaluating information sources.
___b. No, I don't know of official criteria for evaluating information
sources.
___c. No, I don't know the official criteria for evaluating information
sources.
Comments:

6. List up to five criteria that you use to determine the credibility of any
information source.
a.
b.
c.
d.
e.

------------------------------------------------Page 4 Below

7. How credible are the following information sources given only their titles?
Choose one from the following scale:

___7) = Certainly True


___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

A. NY Times
___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

B. Washington Post
___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

C. Harvard.edu Web site


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

D. RussianArmy.ru, Web site in Russian


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

E. RussianArmy.ru Web site in English


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

F. IsraelIndependentNews.is Web site in Hebrew


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

G. IsraelIndependentNews.is Web site in English
___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

H. FrenchIndependentNews.fr Web site in French


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

I. FrenchIndependentNews.fr Web site in English


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

J. NationalGeographic.com Web site


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

K. JanesDefenseWeekly.com Web site


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

L. InformationWeek.com Web site
___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

M. DowJonesInteractive.com Web site


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

------------------------------------------------Page 5 Below

8. How much credibility does each of the following factors add to the total
credibility of a Web site? Use the following scale:

___6) 100 percent Credibility


___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility

A. Recommended by a subject-matter expert.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

B. Recommended by a generalist.
___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility

Optional Comments or Why This Choice:

C. Listed by an Internet subject guide that evaluates Web sites.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

D. Listed in a search engine such as Alta Vista.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

E. Listed in a Web-directory organized by people, such as yahoo.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

F. Content is perceived current.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

G. Content is perceived accurate.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

H. A peer or editor reviewed the content.


___6) 100 percent Credibility
___5) 75 percent Credibility

___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

I. Content's bias is obvious.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

J. Author is reputable.
___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

K. Author is associated with a reputable organization.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

L. Publisher, or Web-host is reputable.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

M. Content can be corroborated with other sources.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

N. Other Web sites link to or give credit to the evaluated site.
___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

O. The server or Internet domain is a recognized copyrighted or trademark


name, such as IBM.com.
___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

P. There is a statement of attribution.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

Q. Professional appearance of the Web site.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

R. Professional writing style of the Web site.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

END OF SURVEY
THANK YOU FOR TAKING THE TIME TO ASSIST ME WITH THIS PROJECT.

Dax R. Norman, JMIC Cohort 10
1234 Blank Street RD
Blankville USA, 12345

XYZ-XYZ-XYZX office
XYZ-XYZ-XYZX home
dnorman@xyz.xyz <mailto:dnorman@xyz.xyz>

APPENDIX C

SURVEY TO THE INTELLIGENCE COMMUNITY

The following text is shown in its original format, except that

“incredible” was changed to “non-credible” to maintain consistency with the

thesis. Also, internal contact phone numbers and E-mail addresses were

removed.

To: Professional Researchers and Analysts


From: Dax Norman, Joint Military Intelligence College Graduate Student

Your assistance is respectfully requested.

Please take about 20 minutes to answer the attached multiple-choice


questions, and E-mail your answers to dnorman@carr.org by 10 Aug. 2001.
This is best done by clicking on "Reply" to sender in your E-mail tool and
including this message. Please share this survey with any other analysts or
researchers who you think may be interested.

WHY:
I am conducting research for my Masters Thesis at the Joint Military
Intelligence College. To do this I need assistance from many other analysts
who use the Web professionally to find information. I understand that your
time is valuable and I sincerely appreciate your assistance. I hope to
demonstrate through this research which criteria are most widely accepted for
measuring the credibility of Web sites. Also, your comments will help me to
categorize the survey responses, and credibility criteria used by different
industries or professions regarding Web sites used by professionals across
various industries.

PRIVACY:
You do not need to include your name; however, if you choose to include your
name, it will only be used by me to contact you if I need more information
regarding your comments. I will not quote you directly unless you indicate in
Questions 3 and 4 that I may do so. Otherwise, only my Thesis
Chairman, Professor Alex Cummins (410-854-4605), and I will have access to
respondent names. Any record of the names in association with the
responses will be destroyed after the research is completed, except those
names included in the thesis with permission.

INSTRUCTIONS:
Instructions for responding to this research survey are included on page 2 of
the survey. My preferred method is to type an X next to your answers and E-
mail this survey back to dnorman@carr.org. This is most easily done by
clicking on "Reply" in your E-mail tool. Please use the comments area after
each question if you feel that you need to explain an answer. However, the
"Comments" area should not be used as a response choice. Please do not conduct
any research or attempt to locate the Web sites mentioned here.

Please share this survey with any other analysts or researchers that you think
may be interested.

Respectfully,

Dax R. Norman
Joint Military Intelligence College, Cohort 10

XYZ-XYZ-XYZX fax
XYZ-XYZ-XYZX other fax
dnorman@xyz.xyz home E-mail

Thesis Chairman, Professor Alex Cummins, XYZ-XYZ-XYZX

------------------------------------------------Page 1 Below

JMIC THESIS SURVEY: CREDIBILITY CRITERIA FOR WEB SITES


Industry Analysts' and Researchers' Participation Is Requested.

Research Conducted by
Dax R. Norman
JMIC, Cohort 10

Research conducted in partial fulfillment of the requirements for the
Master of Science in Strategic Intelligence degree.

8 July 2001

The views expressed in this paper are those of the author and respondents,
and do not reflect the official policy or positions of the U.S. Government.

------------------------------------------------Page 2 Below

JMIC THESIS SURVEY: CREDIBILITY CRITERIA FOR WEB SITES

PLEASE SEND RESPONSES TO:


Email Address Removed

or
Dax R. Norman, JMIC Cohort 10
1234 Blank Street RD
Blankville USA, 12345

or

xyz-xyz-xyzx Office Phone


xyz-xyz-xyzx Commercial Fax, Attn: Dax Norman
xyz-xyz-xyzx Other Fax, Attn: Dax Norman

INSTRUCTIONS: Please complete as much of the following personal
information as you are willing to provide. Your comments will help me to
categorize the survey responses and the credibility criteria that
professionals in different industries apply to the Web sites they use.

Please enter your contact data:

Your Name:
Industry Segment/Organization:
Phone Numbers:
E-mail:
Profession (What kind of work do you do?):
Date:

Note: The phrase "any source" in this survey means both classified and
unclassified sources.

------------------------------------------------Page 3 Below
INSTRUCTIONS: Place an X next to your answers.

1. How do you rate yourself as an Internet user? Choose one:

___a. Expert (understand differences in search tools, and use special
features in tools.)
___b. Apprentice (know the differences in search tools, but don't use
special features.)
___c. Novice (don't know the difference between a search engine and a
directory/guide.)
Comments:

2. Do you use the Web for work related research? Choose one:
___a. Daily
___b. Weekly
___c. Monthly
___d. Rarely
___e. Never

Comments:

3. May I include your name and your responses in my Joint Military
Intelligence College thesis, which may not be publicly available? Choose
one:
___a. Responses Only
___b. Name and Responses
___c. Neither
Comments:

4. May I include your name and your responses in publicly published articles
as a follow on to this thesis? Choose one:
___a. Responses Only
___b. Name and Responses
___c. Neither
Comments:

5. Does your organization have official criteria that you are told to use for
determining the credibility of any source? "Any source" means published,
proprietary, and classified sources. Choose one:
___a. Yes, I know the official criteria for evaluating UNCLASSIFIED
information sources.
___b. No, I don't know of official criteria for evaluating UNCLASSIFIED
information sources.
___c. No, I don't know the official criteria for evaluating UNCLASSIFIED
information sources.

Choose one:
___d. Yes, I know the official criteria for evaluating CLASSIFIED
information sources.
___e. No, I don't know of official criteria for evaluating CLASSIFIED
information sources.
___f. No, I don't know the official criteria for evaluating CLASSIFIED
information sources.

Comments:

6. List up to five criteria that you use to determine the credibility of any
information source.
a.
b.
c.
d.
e.

ATTENTION PLEASE. Complete this page before reading the next page. Then do
not return to this page.
------------------------------------------------Page 4 Below

7. How credible are the following information sources given only their titles?
Choose one from the following scale:

___7) = Certainly True


___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

A. NY Times
___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

B. Washington Post
___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

C. Harvard.edu Web site


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

D. RussianArmy.ru Web site in Russian


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False

Optional Comments or Why This Choice:

E. RussianArmy.ru Web site in English


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

F. IsraelIndependentNews.is Web site in Hebrew


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

G. IsraelIndependentNews.is Web site in English


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

H. FrenchIndependentNews.fr Web site in French


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

I. FrenchIndependentNews.fr Web site in English


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False

Optional Comments or Why This Choice:

J. NationalGeographic.com Web site


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

K. JanesDefenseWeekly.com Web site


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

L. InformationWeek.com Web site


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

M. DowJonesInteractive.com Web site


___7) = Certainly True
___6) = Strongly Credible
___5) = Credible
___4) = Undecided
___3) = Non-credible
___2) = Strongly Non-credible
___1) = Certainly False
Optional Comments or Why This Choice:

------------------------------------------------Page 5 Below

8. How much credibility does each of the following factors add to the total
credibility of a Web site? Use the following scale:

___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility

A. Recommended by a subject-matter expert.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

B. Recommended by a generalist.
___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

C. Listed by an Internet subject guide that evaluates Web sites.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

D. Listed in a search engine such as AltaVista.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

E. Listed in a Web directory organized by people, such as Yahoo.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility

___1) 0 percent Credibility
Optional Comments or Why This Choice:

F. Content is perceived current.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

G. Content is perceived accurate.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

H. A peer or editor reviewed the content.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

I. Content's bias is obvious.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

J. Author is reputable.
___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

K. Author is associated with a reputable organization.


___6) 100 percent Credibility

___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

L. Publisher, or Web-host is reputable.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

M. Content can be corroborated with other sources.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

N. Other Web sites link to or give credit to the evaluated site.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

O. The server or Internet domain is a recognized copyrighted or trademark
name, such as IBM.com.
___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

P. There is a statement of attribution.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility

___1) 0 percent Credibility
Optional Comments or Why This Choice:

Q. Professional appearance of the Web site.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

R. Professional writing style of the Web site.


___6) 100 percent Credibility
___5) 75 percent Credibility
___4) 50 percent Credibility
___3) 25 percent Credibility
___2) 10 percent Credibility
___1) 0 percent Credibility
Optional Comments or Why This Choice:

9. How credible must an intelligence source be for you to use its data in the
following intelligence products? "Use" includes cases in which you would add
qualifiers such as "possibly survived." Choose the required level of
credibility for each type of intelligence.

Scale:
___7) No Opinion
___6) 100 percent Credible
___5) 75 percent Credible
___4) 50 percent Credible
___3) 25 percent Credible
___2) 10 percent Credible
___1) 0 percent Credible
Optional Comments or Why This Choice:

A. Research, or topic summaries.


___7) No Opinion
___6) 100 percent Credible
___5) 75 percent Credible
___4) 50 percent Credible
___3) 25 percent Credible
___2) 10 percent Credible
___1) 0 percent Credible
Optional Comments or Why This Choice:

B. Current, day-to-day developments.


___7) No Opinion
___6) 100 percent Credible
___5) 75 percent Credible

___4) 50 percent Credible
___3) 25 percent Credible
___2) 10 percent Credible
___1) 0 percent Credible
Optional Comments or Why This Choice:

C. Estimative, identifies trends or forecasts opportunities or threats.


___7) No Opinion
___6) 100 percent Credible
___5) 75 percent Credible
___4) 50 percent Credible
___3) 25 percent Credible
___2) 10 percent Credible
___1) 0 percent Credible
Optional Comments or Why This Choice:

D. Operational, tailored, focused to support an activity.


___7) No Opinion
___6) 100 percent Credible
___5) 75 percent Credible
___4) 50 percent Credible
___3) 25 percent Credible
___2) 10 percent Credible
___1) 0 percent Credible
Optional Comments or Why This Choice:

E. Scientific and technical, in-depth, focused assessments.


___7) No Opinion
___6) 100 percent Credible
___5) 75 percent Credible
___4) 50 percent Credible
___3) 25 percent Credible
___2) 10 percent Credible
___1) 0 percent Credible
Optional Comments or Why This Choice:

F. Warning, an alert to take action.


___7) No Opinion
___6) 100 percent Credible
___5) 75 percent Credible
___4) 50 percent Credible
___3) 25 percent Credible
___2) 10 percent Credible
___1) 0 percent Credible
Optional Comments or Why This Choice:

END OF SURVEY
THANK YOU FOR TAKING THE TIME TO ASSIST ME WITH THIS PROJECT.

APPENDIX D

CRITERIA ANALYSTS CURRENTLY USE TO JUDGE CREDIBILITY

Table 15. Survey Question 6: Credibility Criteria Analysts Currently Use.112

Criteria                                  Count  Percent  Valid Percent  Cumulative Percent
corroborated 28 10.0 10.0 10.0
bias 14 5.0 5.0 14.9
reputation of source 10 3.6 3.6 18.5
source authority 8 2.8 2.8 21.4
presentation 7 2.5 2.5 23.8
date of information 6 2.1 2.1 26.0
original source 6 2.1 2.1 28.1
site owner 5 1.8 1.8 29.9
reputable publisher 5 1.8 1.8 31.7
current 5 1.8 1.8 33.5
reputable author 5 1.8 1.8 35.2
source knowledge of subject 5 1.8 1.8 37.0
content 4 1.4 1.4 38.4
motive 4 1.4 1.4 39.9
likelihood source would know the information 4 1.4 1.4 41.3
reasonable 3 1.1 1.1 42.3
reliability 3 1.1 1.1 43.4
accuracy 3 1.1 1.1 44.5
author 3 1.1 1.1 45.6
established news service 3 1.1 1.1 46.6
source past reliability 2 .7 .7 47.3
intended audience 2 .7 .7 48.0
reputable source 2 .7 .7 48.8
expert recommendation 2 .7 .7 49.5
source identity 2 .7 .7 50.2
past reliability 2 .7 .7 50.9
cited by authoritative sources 2 .7 .7 51.6
internal consistency 2 .7 .7 52.3
nationality of author 2 .7 .7 53.0
cited by trusted source 2 .7 .7 53.7
112 Survey, question 6.

recommended by a trusted source 2 .7 .7 54.4
past performance 2 .7 .7 55.2
official Web site 2 .7 .7 55.9
involved in current mission, policy, or planning 2 .7 .7 56.6
level of detail 2 .7 .7 57.3
domain name extension 2 .7 .7 58.0
reasonable the source knows the information 2 .7 .7 58.7
circumstances 2 .7 .7 59.4
amount of technical information 2 .7 .7 60.1
reputation of site 2 .7 .7 60.9
ePharmaceuticals 1 .4 .4 61.2
stated criteria for inclusion of information 1 .4 .4 61.6
Dow Jones Newswire 1 .4 .4 61.9
FDA 1 .4 .4 62.3
PubMED 1 .4 .4 62.6
JAMA 1 .4 .4 63.0
reasonable with other credible information 1 .4 .4 63.3
name of organization providing information 1 .4 .4 63.7
interest 1 .4 .4 64.1
verifiable references 1 .4 .4 64.4
consistent with other information 1 .4 .4 64.8
corroborated by trusted source 1 .4 .4 65.1
fact based 1 .4 .4 65.5
several reports in different languages 1 .4 .4 65.8
how other publications use the source 1 .4 .4 66.2
resources 1 .4 .4 66.5
source available with IP access via DoD intranet 1 .4 .4 66.9
reputation of publisher 1 .4 .4 67.3
classified corroborated by unclassified 1 .4 .4 67.6
own knowledge 1 .4 .4 68.0
publication source 1 .4 .4 68.3
history of reliability 1 .4 .4 68.7
background supplied with Website 1 .4 .4 69.0

long established 1 .4 .4 69.4
source is identified 1 .4 .4 69.8
site owner apparent 1 .4 .4 70.1
personal experience with the source 1 .4 .4 70.5
utility 1 .4 .4 70.8
sources non-internet work 1 .4 .4 71.2
reputable 1 .4 .4 71.5
established print publications 1 .4 .4 71.9
publication reputation 1 .4 .4 72.2
past publications 1 .4 .4 72.6
author locatable 1 .4 .4 73.0
past validity 1 .4 .4 73.3
mainstream source 1 .4 .4 73.7
topic area audio and video 1 .4 .4 74.0
several reports on following dates 1 .4 .4 74.4
second opinion of site 1 .4 .4 74.7
reliability of author 1 .4 .4 75.1
willingness make corrections 1 .4 .4 75.4
published industry journal 1 .4 .4 75.8
industry analysts 1 .4 .4 76.2
financial analysts 1 .4 .4 76.5
domain names 1 .4 .4 76.9
proper attribution and dates 1 .4 .4 77.2
past accuracy 1 .4 .4 77.6
quality of information 1 .4 .4 77.9
proximity to origin 1 .4 .4 78.3
includes citations 1 .4 .4 78.6
past use of source 1 .4 .4 79.0
experience 1 .4 .4 79.4
publisher 1 .4 .4 79.7
media source 1 .4 .4 80.1
authoritative quotes 1 .4 .4 80.4
reasonable current 1 .4 .4 80.8
consistent with confidential information 1 .4 .4 81.1
source past credibility 1 .4 .4 81.5
IP address 1 .4 .4 81.9
ISP 1 .4 .4 82.2
direct or indirect collection 1 .4 .4 82.6
associated with credible organization 1 .4 .4 82.9
several reports in geographic area newspapers 1 .4 .4 83.3

sources used 1 .4 .4 83.6
knowledge about source 1 .4 .4 84.0
information flow 1 .4 .4 84.3
cited by other analysts 1 .4 .4 84.7
source name recognizable 1 .4 .4 85.1
outstanding organization 1 .4 .4 85.4
is this their profession 1 .4 .4 85.8
author hobbyist or a professor 1 .4 .4 86.1
reliable past reporting 1 .4 .4 86.5
second party evaluation 1 .4 .4 86.8
witting or not 1 .4 .4 87.2
collection conditions 1 .4 .4 87.5
source background 1 .4 .4 87.9
relevancy 1 .4 .4 88.3
consistency 1 .4 .4 88.6
association with government 1 .4 .4 89.0
scholarly journals 1 .4 .4 89.3
government publications 1 .4 .4 89.7
educational reference source 1 .4 .4 90.0
stability of information 1 .4 .4 90.4
copyrighted material 1 .4 .4 90.7
attribution provided 1 .4 .4 91.1
reputable url 1 .4 .4 91.5
site owner corporate or government 1 .4 .4 91.8
professional language 1 .4 .4 92.2
professional appearance 1 .4 .4 92.5
lists sources 1 .4 .4 92.9
personal contacts 1 .4 .4 93.2
industry consultants 1 .4 .4 93.6
veracity 1 .4 .4 94.0
relation to government 1 .4 .4 94.3
external consistency 1 .4 .4 94.7
trusted source 1 .4 .4 95.0
reasonableness 1 .4 .4 95.4
source past behavior 1 .4 .4 95.7
past use of deception 1 .4 .4 96.1
collection method 1 .4 .4 96.4
past reliability of source 1 .4 .4 96.8
subject 1 .4 .4 97.2
size of site 1 .4 .4 97.5
how facts are divulged 1 .4 .4 97.9
proximity to original source 1 .4 .4 98.2
sources job 1 .4 .4 98.6

first hand knowledge 1 .4 .4 98.9
experience with source 1 .4 .4 99.3
published by an organization 1 .4 .4 99.6
personal experience with subject 1 .4 .4 100.0
Total 281 100.0 100.0
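The Percent and Cumulative Percent columns of Table 15 follow directly from the raw counts. A minimal sketch of the computation, using a small illustrative subset of the criteria rather than the full list:

```python
# Recompute the Percent and Cumulative Percent columns of Table 15 from
# raw counts. The dictionary is an illustrative subset; the survey
# produced 281 criteria mentions in total.
counts = {"corroborated": 28, "bias": 14, "reputation of source": 10}
TOTAL = 281  # total mentions reported in Table 15

cumulative = 0.0
for criterion, count in counts.items():
    percent = 100.0 * count / TOTAL
    cumulative += percent  # accumulate before rounding, as Table 15 does
    print(f"{criterion:25s} {count:3d} {percent:5.1f} {cumulative:5.1f}")
```

Note that the cumulative column accumulates the unrounded percentages, which is why the table shows, for example, 10.0 and 5.0 summing to a cumulative 14.9 rather than 15.0.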

BIBLIOGRAPHY

Alexander, Jan and Marsha Tate. “The Web as a Research Tool: Evaluation
Techniques.” Wolfgram Memorial Library, Widener University. Chester,
PA. URL: <http://www.science.widener.edu/~withers.evalout.htm>.
Accessed 13 March 2001.

Basch, Reva. Secrets of the Super Net Searchers. Wilton, CT: Pemberton
Press, 1996.

Bates, Mary Ellen. Presentation to WebSearch University Conference in
Reston, VA, 10 September 2001.

Bowen, Wyn, Dr. “Intelligence: A Valuable National Security Resource.” Jane’s
Intelligence Review, 1 November 1999. Dow Jones Interactive, “Publications
Library,” “All Publications,” Search Terms “Open Source Intelligence.” URL:
<http://djinteractive.com>. Accessed 4 March 2001.

Clift, A. Denis. Clift Notes: Intelligence and the Nation’s Security. Washington,
D.C.: Joint Military Intelligence College, 1999.

Cooke, Alison. Authoritative Guide to Evaluating Information on the Internet.
New York: Neal-Schuman Publishers, Inc., 1999.

Director of Central Intelligence. Director of Central Intelligence Directive
2/12. Washington, D.C.: n.p., 1 March 1994.

E-mail Survey. “Joint Military Intelligence College Thesis Survey: Credibility
Criteria for Web Sites.” Conducted by the author, July-August 2001.

G. & C. Merriam Co. Webster’s New Collegiate Dictionary. Springfield, MA:
G. & C. Merriam Co., 1975.

International Telecommunications Union. URL: <http://www.itu.int>.
Accessed 10 December 2001.

Joint Chiefs of Staff. Joint Pub 1-02, Department of Defense Dictionary of
Military and Associated Terms. URL:
<http://www.dtic.mil/doctrine/jel/doddict/data/f/02542.html>. Accessed
13 February 2000.

KoreanNewsSite. URL: <http://KoreanNewsSite.com>. Pseudonym. Accessed
9 December 2001.

Nunnally, Jum C. Psychometric Theory. New York: McGraw-Hill Book
Company, 1967.

NY Times On the Web. URL: <http://nytimes.com>. Accessed 9 December
2001.

Simmons, Robert M., Major, USA. Open Source Intelligence: An Examination
of Its Exploitation in the Defense Intelligence Community. MSSI Thesis.
Washington, DC: Joint Military Intelligence College, August 1995.

Spot Image Corporation. URL: <http://www.spot.com>. Accessed 10
December 2001.

Steele, Robert D. Intelligence and Counterintelligence: Proposed Program for the
21st Century. URL: <http://www.oss.net/OSS21>. Accessed 5 January 2000.

U.S. General Accounting Office. Using Structured Interviewing Techniques.
Gaithersburg, MD: GAO, June 1991.

ANNEX 1.

SURVEY RESULTS

(Not included in original thesis.)

The following survey results tables are included to enable other
researchers to perform their own analysis. These results may also serve as a
baseline for future comparison with new survey data. The column labeled
“response” identifies each respondent who indicated in question 4 that his or
her responses may be used in publications for the general public. Respondents
who answered “neither” to question 4 are not included in these survey
results. The number of excluded respondents was very small and can be
deduced by counting the numbers skipped in the “response” column.
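The deduction described above can be sketched in a few lines; the respondent numbers below are an illustrative subset of the actual “response” column, not the full list:

```python
# Deduce which respondents were excluded by finding the numbers skipped
# in the "response" column (illustrative subset of the IDs in Annex 1).
response_ids = [1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12]

# Any ID in the full range that is absent from the column was excluded.
skipped = sorted(set(range(response_ids[0], response_ids[-1] + 1)) - set(response_ids))
print(skipped)  # -> [7]
```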

Questions that applied only to Intelligence Community analysts are
marked “999” for respondents outside that community. The word “blank” or a
“0” marks a question that could have been answered but was not.

It is important to remind the reader that the views expressed
in this paper and these survey results are those of the author and
the respondents and do not reflect the official policy or position of
the Department of Defense, the U.S. Government, or the
respondents' employers. This survey was administered during the
summer of 2001 by E-mail to participants who had first been
contacted by the researcher, or by associates of the researcher, within
government, industry, and academia.

RESPONSE CATEGORY SEGMENT PROFESSION Q1 Q2 Q3 Q4 Q5 Q5PART2
1 3 College Educator a a a a b 999
2 1 Intell IAnalyst b c a a b 999
3 2 Business Research a a b b b 999
4 2 InfoTech Executiv b b b b a 999
5 2 Research CompSci a b a a b 999
6 1 Defense Info Res a a b a a 999
8 1 Defense IAnalyst a b b a b 999
9 2 Health Research a a a a a 999
10 1 Defense IAnalyst a a b b b 999
11 3 College Educator b a a a b 999
12 1 Defense IAnalyst b b b a c 999
13 2 Finance CPA a a b a b 999
14 3 College Educator a b b b b 999
15 2 InfoTech blank a a a a b 999
16 2 Business Research b a b b c 999
17 1 Trade Linguist a a b b b 999
18 3 College Educator a b a a b 999
19 1 Defense IAnalyst a a b b b 999
20 1 Defense IAnalyst a a b b b 999
21 1 Defense CompSci a a b a b 999
22 1 Defense IAnalyst b a a a a 999
23 1 Defense IAnalyst b a b b b 999
24 2 Medicine Libraria a a a a a 999
25 2 Law Lawyer a a a a b 999
26 1 Defense IAnalyst a a b a a 999
27 1 Defense IAnalyst b c a a a 999
28 1 Defense IAnalyst a a b a b 999
29 1 Defense IAnalyst a a b b b 999
30 1 Defense Training a a b b c 999
31 1 Defense IAnalyst b a b b a 999
32 1 Intell IAnalyst a b b a b blank
33 1 Intell IAnalyst b b a a b e
34 1 Intell IAnalyst b b a a b e
35 1 Intell IAnalyst a b a a b d
36 1 Intell IAnalyst b a a a b e
37 1 Intell IAnalyst b d b b blank d
39 1 Intell IAnalyst a a a a b blank
40 1 Comms Research a a a a b e
41 1 Intell IAnalyst a a b a b e
42 1 Intell IAnalyst b a a a b e
43 1 Intell IAnalyst a a b b b e
45 1 Defense IAnalyst a a b b b blank
46 2 Intell IAnalyst a a b b b e
47 1 Defense Library b a a a b e
48 1 Defense IAnalyst b b b a b e
49 1 Intell IAnalyst b a b a c f
50 1 Intell IAnalyst b a a a b blank
51 1 Intell IAnalyst a a a a b e
52 1 Intell Library b a b b b d
53 1 Intell IAnalyst a a b b b d
54 1 Intell IAnalyst b d a a c f
55 1 Intell IAnalyst a a b a a d
56 1 Intell IAnalyst c d a a b blank
57 1 Intell IAnalyst b c b b b d
58 1 Intell IAnalyst a c a a b blank
59 1 Intell IAnalyst a a b a b d
60 1 Intell IAnalyst a d a a c f
62 1 Intell Physics a a a a b e
63 1 Defense IAnalyst a a b a b blank
64 1 Defense IAnalyst c d b b b e
65 1 Intell IAnalyst b c b b b blank
66 1 Intell IAnalyst b b a a b e
Q6
verifiable references, data verifiable, author expert in subject, author associated with recognized organization.
Author, fact based, credibility of publisher
resources, bias, interest, currency, verifiable
published industry journal, industry analysts, financial analysts, industry cosultants, personal contacts
lists references or sources, professional appearance, profossional language, referrenced by trusted source, recommen
author, publisher, original source, auhor's expertise
owner of site, presentation, content, coroborated, professional presentation
JAMA, PubMED, FDA, ePharmaceuticals, Dow Jones Newswire
own knowledge, author's credentials, cited by other authoritative sources, accuracy, presentation, corroborative
timeliness, authority of author, affiliation of author, original source, expert recommendation
several reports in different languages, several reports in target area newspapers, several reports on followig dates
locate author, past publications, pulication reputation, established print publications, site investors, corroborati
Site suppliers and funders, professionalism, content, bias, reputation
is ita a kown and trusted source and author, trusted recommendation, perceived bias, site ownership and sponsorship
source is identified, corroboration, accuracy, clarity, current
well known and respected, long established, background supplied with website, history of reliability, balanced and u
publication source, reputation of vendor
personal experience with the source, personal experience with subject, profesional appearance and utility, sources n
Corroboration, ownership, history of validity
Established news service, corroboration, secondary evaluation of primary site, mainstream source,
credentials, corroboration, common sense, reliability, climate surrounding activities
reputable news service, reliability of author, author's knowledge of subject, corroborative, willingness make correc
stated criteria for inclusion of information, author's authority, corroborative, stability of information, appropria
reputation of publication or site, corporate or government owner, reputation or credentials of author, cited by othe
Educational reference source, government publications, established news services, scholarly journals
association with government, reliability, veracity, motives, experience of source
national origin, relation to government, bias, internal consistency, external consistency
author, nationality of author, feasibility of information, date of information
presentation, published by an organization rather than individual, content, age of information, experience with sour
original source with first hand knowledge, involved in current mission, policy, or planning, level of involvement cu
how close to the original source, past reliability, reputation of source, likelyhod source would know the informatio
name of organization providing information, data of material, size of the site
data repeated in other sources, does it fit with other credible information, who is the information from (govt, medi
Author and sponsor of site, subject, amount of technical information, date of information
ability t verify information against other sources, past reliability of source, does unclassified information suppor
collection system, past use of deception, supportive information, source identity
easonableness, consistency with other information, trusted source
reliability from experience, accuracy by judgement of factual possibility, consistency, relavency
author's bias, age of information, corrobation with other sources
background of the source, past credibility, motive, corroboration, timeliness
corroborative, agreement with personal views expressed in private, environmental influences impacting source, check
past performance, authority of source, does the information correlate with known information
domain extension, author, media source, publishing house, experience
past use of source, source cited by a trusted source, source includes citations, known bias6
reputation, closeness to original source, corroboration, internal consistency, bias
does the source have access to the information provided, past reporting, bias, is it a source
reputation of source, specificity of source information, corroboration, quality of information
past accuracy, bias, general reliability on intelligence vs press, access of reporter to source information
known and reputable source, corroboration by trusted source, appropriateness of source for the informatin sought
past reliability, corroborative, known bias, target audeance, age of information, proper attribution and dates
blank
domain names, IP address, ISP, coroborative
scientific information including specifications, official company web sites
direct or indirect intelligence collection, collection conditions,who is original source, witting or not, motives, s
domain name extensions, already evaluated in other media, verifiable, information provided already know to be true,
reliability of past reporting, depth of information provided, reputation of source
blank
source name recognizable, reputain, commonly used source cited by other analysts or writers, source is reporting in
bia, bona fides, originality of information, corroboraton with independent sources, flow of information, generation
freshness of information, source's access to the information, motivation for providing the information, level of exp
reputation of author, what is known about the source, personal knowledge about the source, how other publications u
publisher or author of information, sources used, date of the information, or published date, compare to other sour
Q7A Q7B Q7C Q7D Q7E Q7F Q7G Q7H Q7I Q7J Q7K Q7L
5 5 4 4 4 4 4 4 4 4 4 5
5 5 5 blank blank blank blank blank blank 6 6 blank
5 5 5 4 4 4 4 4 4 5 5 5
5 5 5 4 4 4 4 4 4 5 5 5
5 5 6 5 5 5 5 5 5 6 5 5
5 5 4 3 3 blank blank blank blank 7 6 5
6 6 6 4 4 4 5 4 4 6 5 5
7 7 5 4 4 4 4 4 4 6 4 5
6 6 6 blank 5 blank 4 5 5 6 6 5
5 5 4 blank blank blank blank blank blank 5 5 4
4 5 5 4 blank blank 4 blank 4 4 4 4
7 6 6 4 4 4 4 4 4 6 6 6
5 6 5 2 2 2 2 3 3 5 5 4
6 6 5 4 4 4 4 4 4 5 5 5
5 5 blank blank blank blank blank blank blank blank 5 5
5 5 6 4 4 4 4 4 4 5 5 4
5 4 5 4 2 4 5 4 5 5 6 5
6 6 6 4 4 3 3 5 5 7 6 5
5 5 4 3 2 4 4 4 4 6 7 5
#NULL! 5 5 1 1 4 4 3 3 4 6 5
5 5 4 4 5 5 5 5 3 5 4 4
6 5 6 5 5 6 5 6 5 4 6 6
5 5 5 4 4 blank 4 4 4 4 4 5
6 6 4 4 4 5 4 5 5 6 6 4
5 5 6 4 4 4 4 4 4 6 6 4
5 5 4 4 3 5 5 5 5 5 6 5
4 4 4 4 4 4 4 4 4 5 6 4
5 5 4 4 4 4 4 4 4 4 6 4
3 3 2 2 2 4 4 4 4 5 6 4
5 3 5 4 4 4 4 4 4 5 6 4
6 blank 6 3 3 4 4 5 5 blank 6 4
6 6 6 4 4 4 4 4 4 6 6 5
6 6 6 4 4 4 4 4 4 6 6 blank
5 5 4 4 4 4 4 4 4 5 5 5
5 4 5 4 4 blank blank blank blank 5 5 blank
3 5 4 4 2 4 5 4 4 5 5 4
6 6 4 4 4 4 4 4 4 6 6 6
6 5 4 4 4 5 5 4 5 5 7 6
6 5 4 4 4 4 4 4 4 6 6 4
5 5 4 4 3 4 4 4 4 5 5 4
4 4 4 4 4 4 4 4 4 4 4 4
6 4 4 4 5 4 5 4 3 6 7 7
5 4 4 4 4 4 4 4 4 4 5 5
6 6 6 4 4 4 4 4 4 6 6 5
6 6 blank blank blank blank blank blank blank blank 5 000
6 7 5 2 1 4 4 3 3 6 6 5
6 6 5 4 4 4 4 4 4 5 6 5
5 5 4 3 3 4 4 4 4 5 6 5
6 6 5 5 4 4 4 4 4 6 6 5
5 5 4 4 3 4 4 4 4 4 5 5
5 5 5 4 4 4 4 4 4 5 5 5
5 4 5 4 4 4 4 4 4 4 4 4
4 4 4 5 3 4 3 5 5 5 5 5
4 4 4 4 4 4 4 4 4 5 5 4
5 5 5 4 4 4 4 4 4 3 4 4
5 5 5 4 4 4 4 4 4 5 6 6
5 5 4 4 4 4 4 4 4 4 4 4
6 6 5 4 5 4 5 4 5 6 5 5
5 5 6 4 4 4 4 blank 4 4 5 4
5 5 5 4 4 4 4 4 4 5 5 4
#NULL! 5 blank blank blank blank blank blank blank blank blank blank
4 4 5 4 4 4 4 4 4 4 4 4
Q7M Q7N Q7O Q7P Q7Q Q7R Q7S Q8A Q8B Q8C Q8D Q8E
6 999 999 999 999 999 999 5 4 1 1 2
blank 999 999 999 999 999 999 6 6 5 1 2
5 999 999 999 999 999 999 3 2 3 1 2
5 999 999 999 999 999 999 5 3 4 5 5
5 999 999 999 999 999 999 5 5 4 1 1
6 999 999 999 999 999 999 5 4 4 4 4
4 999 999 999 999 999 999 5 4 3 2 2
7 999 999 999 999 999 999 5 4 5 4 4
5 999 999 999 999 999 999 5 5 4 3 3
5 999 999 999 999 999 999 5 3 4 1 1
4 999 999 999 999 999 999 4 3 #NULL! #NULL! 4
6 999 999 999 999 999 999 6 5 4 2 3
5 999 999 999 999 999 999 5 4 3 2 2
4 999 999 999 999 999 999 5 3 4 1 3
nu 999 999 999 999 999 999 4 0 0 0 0
5 999 999 999 999 999 999 5 4 5 5 5
5 999 999 999 999 999 999 5 4 5 3 3
5 999 999 999 999 999 999 5 5 4 3 3
4 999 999 999 999 999 999 4 2 2 2 #NULL!
4 999 999 999 999 999 999 6 1 3 3 3
5 999 999 999 999 999 999 5 5 4 4 5
5 999 999 999 999 999 999 6 5 1 1 1
5 999 999 999 999 999 999 6 4 4 4 5
6 999 999 999 999 999 999 5 5 3 2 2
4 999 999 999 999 999 999 5 3 4 3 3
5 999 999 999 999 999 999 5 4 3 3 3
4 999 999 999 999 999 999 3 1 1 1 1
4 999 999 999 999 999 999 5 4 4 4 4
4 999 999 999 999 999 999 5 5 5 4 4
5 999 999 999 999 999 999 5 2 1 1 1
6 6 5 5 6 6 5 4 3 0 1 1
6 5 5 6 5 5 6 6 1 3 1 3
5 4 5 6 4 6 4 5 4 5 1 3
5 4 5 5 4 4 4 4 2 2 1 1
5 4 5 5 4 6 blank 5 1 3 1 2
4 4 6 6 5 6 6 5 3 3 2 3
6 4 6 5 5 6 6 5 4 5 3 3
6 4 5 7 7 7 7 5 5 3 1 3
5 3 blank 6 6 6 5 5 3 3 1 2
4 3 5 5 6 5 5 4 3 3 2 2
blank blank 4 4 4 4 4 3 1 1 1 1
6 5 6 5 5 6 7 5 5 4 3 0
5 6 6 6 6 6 4 5 3 3 4 2
5 4 5 6 5 5 5 6 5 3 2 2
000 4 5 6 6 6 6 4 3 3 3 3
5 4 6 6 4 5 5 5 4 4 3 3
5 4 6 7 6 7 4 5 4 5 2 2
5 4 6 6 4 6 6 5 4 4 2 2
6 4 6 5 4 5 4 6 5 5 4 4
5 4 5 5 4 4 6 5 4 5 4 5
4 3 5 4 3 5 4 5 3 4 3 3
5 4 6 6 6 5 4 5 4 4 3 2
5 4 5 6 5 5 5 4 3 3 1 1
4 4 5 6 4 5 6 5 3 2 1 1
4 3 5 6 4 7 6 5 2 3 1 2
6 4 5 4 4 5 4 5 4 3 1 2
4 5 6 6 7 7 4 6 4 5 3 2
5 4 5 6 7 5 5 5 4 3 3 3
4 4 6 5 4 5 4 5 4 4 3 0
5 4 5 5 4 5 5 5 5 5 4 4
blank 5 6 7 6 6 blank 6 5 4 2 2
5 3 5 6 6 5 5 4 2 2 1 1
Q8F Q8G Q8H Q8I Q8J Q8K Q8L Q8M Q8N Q8O Q8P Q8Q
3 4 4 4 5 4 4 5 5 4 5 4
4 4 6 5 6 6 5 6 4 2 3 3
4 6 4 3 4 3 3 4 2 4 3 1
4 4 3 4 4 5 5 4 4 4 3 3
5 5 5 5 5 5 5 6 5 4 5 4
4 5 4 2 5 5 4 5 4 4 5 4
3 4 2 2 4 2 2 5 2 1 3 2
4 4 6 1 5 5 5 5 5 3 4 4
4 5 5 4 5 5 5 5 4 3 3 4
4 4 5 4 4 4 1 5 2 2 4 2
6 6 6 4 #NULL! 5 3 6 5 5 4 5
2 4 5 2 5 5 5 6 3 4 4 4
3 4 5 2 5 5 5 5 4 4 3 3
3 5 5 1 5 5 4 6 4 3 3 2
0 0 0 0 0 0 0 0 0 0 0 0
5 5 5 4 5 5 5 6 5 5 4 4
4 5 5 1 5 5 5 6 5 4 4 4
4 5 6 6 5 5 4 6 5 4 6 4
4 4 3 2 5 5 3 5 3 4 0 2
1 1 3 1 6 6 6 6 3 3 2 1
5 5 5 3 5 4 4 5 4 4 4 5
2 5 5 6 5 5 5 6 1 1 1 1
6 6 6 5 5 5 5 5 5 4 5 4
4 5 5 4 5 4 4 5 4 5 4 3
5 5 5 5 5 5 5 6 5 4 4 3
4 4 5 4 5 4 4 5 4 3 5 3
2 4 3 1 3 2 1 3 1 1 1 1
5 4 5 2 5 5 5 6 5 5 5 4
5 5 5 4 5 5 5 6 5 5 5 6
3 5 5 5 5 5 5 5 4 4 5 1
0 6 1 2 5 5 5 6 3 4 3 4
3 5 5 2 5 5 4 6 2 4 5 4
5 5 5 2 5 4 5 5 5 5 5 4
5 5 5 5 6 5 3 6 4 3 4 3
4 2 3 1 4 3 3 5 1 3 3 3
5 5 3 1 2 3 3 4 2 2 2 3
5 5 5 2 5 5 3 6 4 1 4 1
3 5 5 2 5 5 4 6 2 4 5 4
3 4 4 2 4 5 2 5 5 1 4 1
5 6 4 2 4 4 3 5 4 2 3 1
1 0 3 2 2 2 1 3 1 1 1 1
5 0 5 2 5 5 5 5 5 5 5 4
5 5 5 4 5 3 4 5 3 4 3 3
3 4 5 4 4 4 4 5 2 4 3 3
4 5 4 3 3 4 2 5 4 2 3 2
4 4 5 5 5 5 5 6 5 4 4 3
3 5 5 1 5 5 3 5 3 3 3 1
2 5 5 4 5 5 5 5 4 3 3 1
5 5 5 4 5 5 5 6 5 5 5 4
5 5 5 4 5 5 5 5 5 5 5 5
4 4 3 4 4 4 4 4 4 4 5 3
5 5 5 4 5 3 4 5 3 4 3 3
3 3 3 2 4 3 3 5 4 6 5 1
2 5 5 1 3 3 2 5 2 2 2 2
1 4 5 4 6 4 5 6 2 2 2 2
3 4 5 3 5 4 3 5 5 2 4 4
5 4 3 4 4 3 3 3 3 4 5 2
3 3 4 3 4 5 4 5 4 5 5 2
3 5 5 4 5 5 5 5 4 4 3 3
5 5 5 3 5 5 5 5 5 5 5 4
4 4 5 4 5 5 4 6 4 2 4 1
3 4 4 3 4 4 4 5 3 1 2 1
Q8R Q9A Q9B Q9C Q9D Q9E Q9F
2 999 999 999 999 999 999
3 999 999 999 999 999 999
1 999 999 999 999 999 999
3 999 999 999 999 999 999
5 999 999 999 999 999 999
4 999 999 999 999 999 999
3 999 999 999 999 999 999
4 999 999 999 999 999 999
4 999 999 999 999 999 999
2 999 999 999 999 999 999
4 999 999 999 999 999 999
4 999 999 999 999 999 999
3 999 999 999 999 999 999
3 999 999 999 999 999 999
0 999 999 999 999 999 999
5 999 999 999 999 999 999
4 999 999 999 999 999 999
5 999 999 999 999 999 999
3 999 999 999 999 999 999
3 999 999 999 999 999 999
4 999 999 999 999 999 999
1 999 999 999 999 999 999
4 999 999 999 999 999 999
4 999 999 999 999 999 999
2 999 999 999 999 999 999
3 999 999 999 999 999 999
0 999 999 999 999 999 999
5 999 999 999 999 999 999
6 999 999 999 999 999 999
3 999 999 999 999 999 999
4 4 5 5 5 4 5
4 5 4 5 3 5 2
4 3 3 5 5 5 5
4 6 4 6 5 6 4
3 4 4 4 4 5 5
3 4 4 5 5 4 4
1 5 5 4 5 3 6
4 6 5 5 6 6 6
2 5 3 5 4 7 2
1 4 3 4 5 5 6
7 5 5 5 5 5 1
4 5 7 4 5 5 5
1 1 1 1 3 1 1
3 4 4 4 5 5 7
3 4 4 4 5 4 5
3 6 5 5 5 5 4
2 4 5 4 5 7 5
2 5 5 4 4 3 5
4 7 5 5 6 5 6
5 4 3 3 3 3 2
3 6 6 6 6 6 6
1 3 3 2 3 1 4
1 7 7 7 7 4 7
2 5 3 4 5 5 3
2 7 7 7 7 7 7
3 4 4 3 5 7 4
2 6 6 6 6 6 6
3 4 5 5 6 5 5
3 5 5 5 5 5 4
5 5 4 4 5 4 4
2 5 5 5 5 5 5
1 4 2 5 1 5 2