
INFORMATION SYSTEMS RISK FACTORS, RISK ASSESSMENTS, AND AUDIT

PLANNING DECISIONS

Jean C. Bedard

Cynthia Jackson

Both at:
College of Business Administration
404 Hayden Hall
Northeastern University
Boston, MA 02115

Lynford Graham
Director of Audit Policy
BDO Seidman LLP
330 Madison Avenue
10th Floor
New York, NY 10017

Acknowledgments: The authors thank the public accounting firms providing advice and the time
of their personnel in support of this research. We also appreciate helpful comments from Kathy
Hurtt, Ganesh Krishnamoorthy, and Margarita Lenk.
INFORMATION SYSTEMS RISK FACTORS, RISK ASSESSMENTS, AND AUDIT
PLANNING DECISIONS

ABSTRACT

In this study, we examine systems risk factors identified by external auditors for a sample

of their actual audit clients. Specifically, we study two important areas of information systems

risk: the risk of breaches in system security and the risk that the information provided by the

system is inadequate. To perform the study, we examine the nature of systems risk factors

identified, and relate those risk factors to the auditors’ systems risk assessments and audit test

plans. We find that systems risk factors are identified for a high proportion of clients, most

frequently including issues of management style and competence, maintaining system currency,

and adequacy of documentation. Risk assessments are significantly associated with the number

of risk factors identified within each area, and more risk factors are identified in clients with

higher business risk. To address systems risk factors, auditors most often choose review/inquiry

procedures. To a lesser extent, we observe the design of some tests of controls to address EDP

security risk issues and the design of some substantive testing procedures to address management

information risk issues. Audit test planning is statistically associated with system-specific types

of risk factors in the EDP security risk area, and with both company-level and system-specific

risk factors in the management information risk area.

Key words: Information systems, Control risk, Systems risk, Audit risk, Audit planning
INFORMATION SYSTEMS RISK FACTORS, RISK ASSESSMENTS, AND AUDIT
PLANNING DECISIONS

INTRODUCTION

The purpose of this study is to examine external auditors’ perspectives on information

systems risk in their actual audit clients. During the past two decades, companies have invested

considerable resources in information technology (IT). These organizations rely on IT to collect,

maintain and communicate data to support achievement of their objectives, and to measure their

achievements both internally and externally (e.g., Tucker 2001). When developing IT systems,

enterprises tend to focus on the benefits of technology. However, they should also recognize the

need to understand and manage its associated risks. The greater the reliance on IT for managing

and controlling key business operations, the greater the likelihood that inadequate systems will

prevent business goals from being achieved. While research on systems risk is important in

gaining an understanding of how systems can be improved, there is little evidence in the

literature regarding this issue.

This study uses auditors’ evaluations of their clients’ systems risk factors as a source of

data on the difficulties frequently seen in company systems. In considering a client’s IT

environment, auditors need to understand the risk that systems may not perform as planned. This

implies that specific risk factors related to IT functioning will be identified and documented, and

their potential role in the engagement will be analyzed along with other elements of the internal

control environment. The Auditing Standards Board has recently focused attention on the role of

clients’ information systems in controlling business processes in SAS No. 94, “The Effect Of

Information Technology On The Auditor’s Consideration Of Internal Control In A Financial

Statement Audit” (AICPA 2001). This new standard particularly emphasizes that the auditor

should obtain an accurate assessment of the role of a client’s systems in its internal control

environment, including both the quality of the systems and the functions performed by them.

Additionally, the Sarbanes-Oxley Act of 2002 (H.R. 3763) will further emphasize internal

controls. The Act will require public companies to file a report on internal controls with their

annual reports. This report will include management’s assessment of the effectiveness of control

design and procedures, and the company’s auditors will report on this assessment.

In addition to its importance for auditors, research on auditors’ consideration of IT risk

should also interest corporate systems professionals. The auditor’s evaluation of a client’s systems

risk is an important source of information on systems quality. The systems literature has encountered

difficulty in evaluating systems effectiveness (Stone 1990; Arnold 1995). One means of gathering

information on systems effectiveness (within a particular company’s system or across companies)

is to employ information from auditors. Because they consider systems risks for a variety of

clients during every audit cycle, auditors are in a unique position to identify weaknesses that

inhibit system performance. Once risk factors have been identified and defined, solutions can be

developed, implemented and integrated into the business process. Thus, research on factors that

influence auditors’ views of their clients’ systems is of potential benefit both to management and

systems developers in improving systems quality. Armed with this knowledge, system developers

will be able to design information systems that will better mitigate the organization’s business

risks. Additionally, increased awareness of risk factors will provide company managers and

system developers with greater opportunities to devise and implement appropriate and adequate

controls during the development process, which should help minimize the cost of implementing

controls (Lainhart 2001).



While systems risk analysis is clearly of interest to both auditors and systems

professionals, we are unaware of research providing descriptive evidence on the nature of risks

commonly present in business systems, and the implications of such risks for audit testing. This

study addresses this research gap by examining two specific areas of information systems risk:

management information quality and EDP security.1 These specific risk areas encompass the

physical and electronic integrity of client data systems, and the appropriateness of the

information contained in those systems, respectively. Within each risk area, we document the

frequency of specific system and client characteristics that auditors identify and consider when

planning for a sample of actual engagements. Further, we assess the association of specific types

of risk factors with auditors’ risk assessments, and with decisions to plan various types of audit

tests. To conduct the study, we asked professional auditors to identify, for one of their actual

clients, specific client conditions and issues associated with system security and information

quality. The auditors also assessed the level of risk within each area, and noted the tests they

would perform as part of the overall audit plan.

1
The two aspects of system risk that we consider (management information quality and EDP security) were chosen
from the risk identification and assessment instrument of a Big 5 auditing firm. We chose to focus on two specific
and important areas of systems risk in order to collect detailed data from participants, without making the research
task unduly onerous. The two specific risk areas were selected by a focus group of experienced auditors; see the
Methods section for further details.

Our results show that for a very high proportion of clients, auditors identify at least one

risk factor in the EDP security and management information areas, even though this group of

clients is assessed as being of relatively low overall risk. The most frequent types of risk factors

identified in the area of system security are related to management attitude, keeping the system

current, and maintaining adequate documentation. In the management information area,

management attitude factors are also frequently identified, along with management competence,


and the nature and accuracy of information considered. Regarding audit tests planned in the

system security risk area, we find that system-specific types of risk factors are statistically

associated with planning of both tests of controls and review/inquiry procedures. In the

management information risk area, both company-level and system-specific risk factors are

statistically associated with planning review/inquiry and designing substantive tests. The

implications of these results for auditors and systems professionals are discussed in the paper’s

concluding section.

BACKGROUND AND HYPOTHESIS DEVELOPMENT

Auditing standards have long required that when performing the engagement, auditors

obtain a sufficient understanding of the client’s internal control (e.g., SAS No. 55; AICPA 1988).

Recently, the Auditing Standards Board has issued SAS No. 94 (AICPA 2001), which expands

and clarifies the auditor’s responsibility to understand the role of IT in the client’s business (and

specifically its control environment), and how the client’s use of IT affects the audit strategy.

Particularly, the auditor’s understanding of internal control supports a key audit strategy decision

that must be made in every engagement: to what extent to rely on the client’s controls. If the

client’s controls cannot be trusted, or if it would be inefficient to determine whether they can be

trusted, the auditor may bypass the client’s control system and apply direct substantive tests to

the account balances.2

2
While U.S. auditing standards previously permitted auditors to “audit around” the client’s controls, this option
may not continue. As part of its audit risk project, the U.S. Auditing Standards Board is considering the required
nature and extent of testing of controls for all engagements. However, as noted above, the new Sarbanes-Oxley Act
requires that auditors of public companies evaluate systems effectiveness in order to report on management’s
assertions about its systems, prompting near-term consideration of this issue at the professional level.

If the auditor determines through documentation and testing that the

controls are well designed and operating effectively, they may be relied upon and the extent of

direct substantive testing of the accounts correspondingly reduced.

A crucial component of the reliance decision is the evaluation of risk that the client’s

internal controls will not perform as intended. This paper examines two areas of that risk

evaluation, management information quality and EDP security. First, if the output of the client’s

systems does not provide suitable and accurate information to support management’s decisions,

then the system may not “signal” when problems occur. If so, the monitoring function of controls

is not achieved, and the risk of control failure is increased. Second, to the extent that these

controls are computerized, lack of EDP security could negate any value in the effective design of

the controls. In effect, security breaches can allow deviation from established procedures

programmed into the system. Thus, in both risk areas, audit effectiveness will be increased if

related risk factors are identified, and the level of risk assessed and tested, as part of the overall

evaluation of the client’s internal controls. Despite the importance of this task, prior research

provides no direct evidence about the relationship of specific client characteristics to risk

assessments and testing decisions in either area. In the following section, we review related

studies and develop our hypotheses.

Research Hypotheses

Our first hypothesis relates to the relationship between risk factors and risk assessments

within each risk area. Auditing standards note that auditors should respond to engagement risks

by increasing their risk assessments and altering the nature, timing, and extent of audit

procedures (e.g., SAS No. 47, AICPA 1983; SAS No. 82, AICPA 1997). Prior research on the

relationship of risk factors (specific client facts or issues) to risk assessments (summary

judgments of the level of risk) using behavioral designs generally finds that auditors’ overall

inherent/control risk assessments are related to differences in risk factors (e.g., Bedard and

Wright 1994; Davis 1996). Further, auditors’ fraud risk assessments are also associated with the

presence of risk factors that may indicate the presence of fraud (e.g., Zimbelman 1997). Few

archival studies of audit risk identify specific risk factors, but instead tend to focus on risk

assessments because these can be more easily compared across clients. An exception is Mock

and Turner (2002), who find that auditors’ fraud risk assessments are associated with the number

of fraud risk factors they identify in their clients. Based on prior research and professional

standards, we hypothesize that:

H1. Within each systems risk area, the level of risk assessed is positively associated with
the risk factors identified.

Our second set of hypotheses deals with the relationship between the two areas of

systems risk considered in this study, the security of EDP systems and the quality of information

they produce. While prior studies do not directly address this issue, it is likely that these types of

risk are correlated, due to common causal factors. For instance, poor system design can be due to

inadequate knowledge, insufficient effort/resources, or a combination of both. The level of

knowledge and effort applied to system development is likely to affect both the mechanisms in

place for protecting data and the nature of the data that the system generates. In contrast, a well-

controlled business will provide adequate resources to secure and update systems in place, and

ensure that the information needs of management are being met. In support of this argument,

Haskins (1987) finds correlations in auditors’ perceptions of attributes related to client control

environments. Likewise, Waller (1993) finds similarity in auditors’ risk assessments across

assertions for a given account. These arguments lead to the following hypotheses:

H2a. The number of risk factors identified in the two systems risk areas (EDP and
Management Information) is positively associated.

H2b. The level of risk assessed in the two systems risk areas (EDP and Management
Information) is positively associated.

The third research hypothesis considers the role of system risk factors in planning audit

tests. As noted previously, auditing standards indicate that auditors should adjust the audit plan

to reflect client risk factors. Regarding audit test planning decisions, prior research using

behavioral methods often finds evidence of risk responsiveness in audit planning, consistent with

auditing standards (e.g., Libby et al. 1985; Houston et al. 1999; Asare and Wright 2002).

However, results of archival studies on auditors’ risk responsiveness tend to vary, with some

detecting little relationship between risk and audit planning decisions (e.g., Bedard 1989; Mock

and Wright 1993, 1999) and others finding a relationship (e.g., Davis et al. 1993; Hackenbrack

and Knechel 1997; Johnstone and Bedard 2001; Mock and Turner 2002). While the reasons for

the mixed findings are not clear, the weight of this literature supports a directional expectation

for an effect of risk on audit planning. However,

this relationship has not been tested in the systems context specifically, motivating our extension

of the literature. We propose that:

H3. Within each systems risk area, identification of risk factors will result in increases in
audit tests planned.

This research hypothesis anticipates a positive correlation of risk factors and audit tests.

Detection of such a relationship is important, but it is even more informative to address which

types of risk factors are associated with particular types of audit tests. In other words, how do

auditors address particular types of risk in the audit plan? To address this issue, we provide

supplemental analysis relating the nature of audit tests to categories of risk factors within each

risk area.

METHODS

Participant selection and characteristics

Data for this study were collected from 46 auditors serving on engagement teams for 23

clients of two Big 5 accounting firms, in the presence of one of the authors.3 Selection and

scheduling of participants were accomplished with the assistance of a contact person at each

firm, who was only aware that the study concerned audit planning. Participants responded to a

questionnaire (described in the following section) about characteristics of one of their actual

clients, which was selected in advance of the research session. Selection criteria were guided by

the need for participants to have well-developed knowledge of client conditions. These criteria

included the following: each participant must have completed at least one planning cycle for the

client and be at least a senior on the current audit. Participants include 19 seniors or supervising

seniors on the identified audit, 18 managers, seven senior managers and two partners. Their

average length of audit experience is 6.9 years.

3
This paper further analyzes data collected for another study, which studies the design of decision aids for audit risk
identification, risk assessment and audit test planning. The prior study considers four specific risk areas, including
the two areas of systems risk considered here. The primary goal of that study is to compare effects of decision aid
design (i.e., a “positive” or “negative” decision aid orientation) on risk identification. Thus, we collected data from
two engagement team members for each of 23 clients, who developed their responses independently. Because the
goal of the current study is to present a detailed analysis of the type of systems risk factors identified, and their
relationship to audit test planning, we combine data across orientations for most of the analysis. A dichotomous
variable for decision orientation is not significant in the models presented in this paper. However, the paired design
allows additional insight, and so we also present supplemental analysis of differences in responses between pair
members.

Research task, procedures and variables

The research task was developed with the assistance of a focus group of partners and

managers from a participating firm. The focus group identified specific statements from the


firm’s decision aid for risk identification and assessment. In ranking risk areas on

appropriateness for the study, the focus group considered such factors as the importance of the

risk area in audit planning, its application to a broad range of clients, and its potential for

differentiating more from less risky clients. Among the areas selected are the two systems risk areas

considered in this study: (1) whether top management sufficiently oversees and addresses the

risks related to data security and EDP system security for critical information systems; and (2)

whether there are weaknesses in the relevance, completeness, timeliness and reliability of

management information used by the company to monitor enterprise activity. These specific risk

areas encompass the physical and electronic integrity of client data systems, and the

appropriateness of the information contained in those systems, respectively.

Within each area, participants first noted a summary assessment of risk on a seven-point

Likert scale. Next, they were asked to list “all specific risk conditions or issues involved with this

client that you can recall, which you believe should be considered during the development of the

audit program or are worthy of note by engagement personnel as the audit proceeds.” The

number of risk factors identified was determined from the client issues listed. Each issue was

independently coded into the following types: Negative (a fact about the client that would lead to

an increase in risk), Positive (a fact about the client that would lead to a reduction of risk),

Neutral (a fact about the client that would not affect the level of risk).4 The negative issues are

termed Systems Risk Factors Identified. The risk factors are further divided into types (i.e.,

external factors, company factors and system-specific factors) in order to categorize the nature of


factors that auditors considered important in planning engagements for these clients, and their

relationship to audit planning decisions.

4
Agreement on the codings was 95.7 percent. Differences in coding were reconciled through analysis of the
participants’ responses regarding how the audit plan should address the identified issue, and the audiotapes of their
discussions during the debriefing sessions.
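To make this tallying procedure concrete, the sketch below (in Python with pandas) shows one way coded issues could be aggregated into the counts reported in Table 1. The column names and example issues are hypothetical illustrations of the data structure, not the study's actual data or instruments.

```python
import pandas as pd

# Hypothetical coded data: one row per issue listed by a participant.
# Columns (our own labels, not the study's): client_id, auditor_id,
# risk_area ("EDP" or "MI"), factor_type ("external", "company", "system"),
# direction ("negative", "positive", "neutral"), and a short issue label.
issues = pd.DataFrame([
    {"client_id": 1, "auditor_id": "A", "risk_area": "EDP", "factor_type": "system",
     "direction": "negative", "issue": "outdated system documentation"},
    {"client_id": 1, "auditor_id": "B", "risk_area": "EDP", "factor_type": "system",
     "direction": "negative", "issue": "outdated system documentation"},
    {"client_id": 1, "auditor_id": "A", "risk_area": "MI", "factor_type": "company",
     "direction": "negative", "issue": "controller lacks systems background"},
    {"client_id": 2, "auditor_id": "C", "risk_area": "EDP", "factor_type": "company",
     "direction": "positive", "issue": "strong tone at the top"},
])

# Only negatively coded issues count as "Systems Risk Factors Identified".
factors = issues[issues["direction"] == "negative"]

# Column 1 of Table 1: raw number of risk factors, by risk area and type.
raw = factors.groupby(["risk_area", "factor_type"]).size()

# Column 2: unique factors, counting an issue cited by both members of an
# engagement team only once.
unique = (factors
          .drop_duplicates(subset=["client_id", "risk_area", "factor_type", "issue"])
          .groupby(["risk_area", "factor_type"]).size())

# Column 3: number of clients with at least one factor of each type.
clients = factors.groupby(["risk_area", "factor_type"])["client_id"].nunique()

print(pd.DataFrame({"raw": raw, "unique": unique, "clients": clients}).fillna(0))
```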

RESULTS

Descriptive Statistics

Table 1 describes the types of systems risk factors identified by participants in the EDP

security and management information areas. The table shows that the 46 participants (23 clients)

identified 65 risk factors as relevant to EDP security, with more system-specific factors (47)

identified than company factors (18) or external factors (none). Details of the specific factors

identified by participants are given in the Appendix. As the Appendix shows, the most frequent

factors are management style/attitude (8), keeping up with change in the systems area (11) and

concerns about system documentation (24). Table 1 shows that when duplicate factors (i.e., those

identified by both members of an engagement team) are eliminated, we find 57 unique factors

identified; thus only eight factors are common. Of the 23 clients, 17 (74 percent) had some EDP

security issue that at least one member of the engagement team considered worthy of attention

when developing the audit program.

Insert Table 1 About Here

For the management information risk area, similar numbers of company (34) and system-

specific (32) factors are identified, and four external factors are also cited. The Appendix shows

that the most frequent concerns relate to management ability/competence (8 factors),

management style/attitude (13 factors), and the nature of information produced (13). Table 1

shows that in this risk area, 68 risk factors are unique, and only two are common. Issues


regarding management information are even more frequent than EDP security, with 20 of 23 (87

percent) identified as having an issue worthy of audit attention.

One aspect of our descriptive results stands out as being particularly unexpected and

interesting: the relatively low proportion of risk factors identified by both individuals on a given

engagement team.5 This apparent lack of consensus is curious, and deserves further research. A

factor that may contribute to the level of discrepancy is our fairly high standard for determining

common versus distinct factors. Each member of a pair had to identify precisely the same fact in

order for it to be determined common to both factor lists. For example, for a management

integrity/competence issue to be coded as common, the same client employee had to be

identified by both parties. While the coding standards may have magnified apparent differences,

the level of distinct factors is sufficiently high to warrant further investigation. The concluding

section of the paper discusses possible reasons for this effect, and its implications.

5
Although the nature of risk factors identified differed between participants within the pairs, their risk assessments
did not. For EDP security, the mean risk assessment is 1.92 (positive orientation) and 1.64 (negative orientation).
For management information, the risk assessment is 1.59 (positive) and 1.80 (negative). Neither difference is
statistically significant.

Table 2 describes levels of risk assessed in each systems risk area, and the nature of audit

tests planned to address those risks. The mean level of EDP security risk assessed by participants

is 1.78 on a scale of 0 to 6, and the mean risk assessment in the management information area is

1.70.6 For both risk areas, the most frequent type of test is review/inquiry, in which the auditor

either reviews client-prepared work, or seeks responses to questions from client personnel (mean

= 0.91 for EDP security, 1.09 for management information). For EDP security, the next most


frequent type is tests of controls (i.e., assessment of the control environment, testing controls, or

calling upon the firm’s EDP specialists) (mean = 0.43). Very few substantive tests are planned in

this risk area (0.07). For the management information area, relatively more substantive tests

(0.30) than control tests (0.02) are planned.

6
The mean level of risk in both areas is relatively low, presumably reflecting the client portfolio of (then) Big 5
firms. Despite the low risk assessments, auditors identified risk factors of concern for almost all clients.

Insert Table 2 About Here

Results of Hypothesis Testing

H1 proposes that within each risk area, auditors’ risk assessments are associated with the

number of risk factors identified. Table 3 shows support for this hypothesis in both risk areas.

For EDP security (Panel A), risk assessments are correlated with identification of company-level

factors at 0.205 (p = 0.086), and with system-level factors at 0.208 (p = 0.083), although significance

levels are marginal. Panel B relates risk factors and risk assessments in the management

information area. Management information risk assessments are correlated with identification of

company-level factors at 0.383 (p = 0.004), and with system-level factors at 0.461 (p = 0.001). Taken

together, these results imply that auditors’ risk assessments related to client systems are

calibrated to the number of risk factors considered while making the assessment. However, the

relationship is stronger for management information than for EDP security. This difference may

be due to the frequent practice of calling on EDP specialists to assist with engagements in which

system security risks have been identified. Thus, general auditors have less experience in the task

of assessing the implications of EDP security risk factors, relative to management information

concerns, as the latter are normally considered in assessing the client’s control environment.

In addition to capturing systems risk assessments, we also asked participants to assess the

client’s overall business risk, on Likert scales representing dimensions of profitability, growth,

liquidity, and going concern risk. Using an overall business risk measure that is the average of

the four dimensions, we find that client business risk is positively correlated with EDP security

risk (Pearson correlation = 0.280, p = 0.030), but not with management information risk (0.171,

p = 0.128). Further analysis shows that clients with at least one control environment risk factor

have higher business risk (t = 1.386, p = 0.082), suggesting that factors such as management

integrity may be underlying both the systems and financial health problems.
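As an illustration of the statistics behind these results, the sketch below (Python with NumPy and SciPy) shows how an overall business risk measure can be formed as the mean of the four Likert dimensions, correlated with a risk assessment using a one-tailed Pearson test, and compared across groups with a one-tailed t-test. All variable names and simulated values here are our own illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 46  # number of responses in the study; the values below are simulated

# Hypothetical 0-6 Likert-style measures for each response.
edp_risk = rng.integers(0, 7, n).astype(float)
# Business risk dimensions: profitability, growth, liquidity, going concern.
dims = rng.integers(0, 7, (n, 4)).astype(float)
business_risk = dims.mean(axis=1)  # overall measure = average of the four dimensions

# Pearson correlation; the paper reports one-tailed p-values, so halve the
# two-tailed p when the correlation has the predicted (positive) sign.
r, p_two = stats.pearsonr(business_risk, edp_risk)
p_one = p_two / 2 if r > 0 else 1 - p_two / 2
print(f"business risk vs. EDP security risk: r = {r:.3f}, one-tailed p = {p_one:.3f}")

# Comparing business risk for clients with vs. without at least one control
# environment risk factor, using a one-tailed two-sample t-test.
has_factor = np.arange(n) % 2 == 0  # hypothetical indicator
t, p = stats.ttest_ind(business_risk[has_factor], business_risk[~has_factor],
                       alternative="greater")  # requires SciPy >= 1.6
print(f"t = {t:.3f}, one-tailed p = {p:.3f}")
```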

Insert Table 3 About Here

H2a and H2b propose that the extent of risk factor identification and the level of risk

assessed, respectively, will be correlated between systems risk areas. Table 3 Panel C shows that

the number of risk factors identified for EDP security is positively correlated with the number

identified for management information, supporting H2a (correlation = 0.455, p = 0.001). Panel C

shows that H2b is also supported, as the risk assessments for EDP security and management

information are also positively correlated (0.340, p = 0.010). These results imply that the two

risk areas studied here are related, but distinct. Clients with higher risk in one of the areas tend to

also have higher risk in the other, although the association is clearly not perfect.7 These results

imply some common, and some area-specific, causes for the observed system weaknesses.

H3 proposes that within each risk area, audit test planning is related to the risk factors

identified. We test this hypothesis using multivariate ANOVA for each risk area, with the

multiple dependent variables being the number of control tests, substantive tests, and

review/inquiry tests planned. The models include main effects for Company Factors (i.e.,

identification of at least one company risk factor) and System-specific Factors (identification of

at least one system-specific factor), and use the area’s risk assessment as a covariate.8 Table 4

presents results of the MANOVA model for the EDP security risk area. Panel A shows that

System-specific Factors significantly affect the planning of the three types of audit tests taken as

a whole (F = 12.439, p = 0.000), but Company Factors and EDP security risk assessments do

not. Panel B of Table 4 shows results of the individual ANOVA models for control and

review/inquiry tests.
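To make the model specification concrete, the sketch below shows one way such a MANOVA with a follow-up univariate Type III ANCOVA can be estimated in Python's statsmodels. The data frame, column names, and values are hypothetical illustrations of the design, not the study's data or the participating firms' tools.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.multivariate.manova import MANOVA

# Hypothetical response-level data: counts of each test type planned,
# indicators for identification of at least one company-level or
# system-specific risk factor, and the 0-6 risk assessment (covariate).
df = pd.DataFrame({
    "control_tests":     [0, 1, 2, 0, 1, 0, 3, 1, 0, 0],
    "substantive_tests": [0, 0, 1, 0, 0, 2, 0, 1, 1, 0],
    "review_inquiry":    [1, 2, 3, 0, 1, 1, 4, 2, 2, 0],
    "company_factor":    [0, 1, 1, 0, 0, 1, 1, 0, 1, 0],
    "system_factor":     [0, 1, 1, 0, 1, 0, 1, 1, 0, 0],
    "risk_assessment":   [1, 3, 4, 0, 2, 2, 5, 1, 3, 1],
})

# Multivariate test (Wilks' lambda) across the three test-type counts.
mv = MANOVA.from_formula(
    "control_tests + substantive_tests + review_inquiry ~ "
    "C(company_factor) + C(system_factor) + risk_assessment",
    data=df)
print(mv.mv_test())

# Follow-up univariate model for one dependent variable, reported with
# Type III sums of squares as in Tables 4 and 5.
ols = smf.ols("control_tests ~ C(company_factor) + C(system_factor) + risk_assessment",
              data=df).fit()
print(sm.stats.anova_lm(ols, typ=3))
print(f"Adjusted R-squared: {ols.rsquared_adj:.3f}")
```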

Insert Table 4 About Here

Model 1 of Panel B presents results of the ANOVA model explaining planning of control

tests (F = 4.040, p = 0.013; adjusted R2 = 0.169). System-specific factors are significant

in this model (F = 8.884, p = 0.005), but Company factors and EDP security risk assessments are

not significant. Model 2 presents results of the model explaining planning of review/inquiry

tests, which shows a similar pattern but has greater explanatory power (F = 9.625, p = 0.000;

adjusted R2 = 0.365). The identification of System-specific factors is highly significant (F

= 24.744, p = 0.000), while Company Factors and EDP risk assessments are not. Taken together,

these results imply that EDP security issues are addressed through control tests and

review/inquiry, and that the important factors driving planning of both types of test are system-

specific. Thus, while a number of company-level factors (e.g., management integrity and

competence) were identified, they do not seem to significantly affect the audit plan. Further, after

controlling for effects of risk factor identification, we find that the risk assessment is not

associated with planning of either type of test.

7
We also tested whether the nature of risk factors identified and the level of risk assessed are affected by the
auditor’s years of experience, rank or Firm affiliation. None of these variables is significantly associated with
identification of risk factors or risk assessments for either EDP security or management information.
8
External factors are not included as an independent variable in the models, as so few of them were cited.

Table 5 presents results of estimating the MANOVA model explaining the nature of tests

planned to address risk in the management information area. Panel A shows that both Company

Factors and System-specific Factors are significant in explaining audit test planning (F = 10.652,

p = 0.000; and F = 4.189, p = 0.011, respectively), while the management information risk

assessment is not significant. Two of the three individual ANOVA models are significant, those

for review/inquiry tests (F = 10.668, p = 0.000; adjusted R2 = 0.392) and substantive

tests (F = 5.130, p = 0.004; adjusted R2 = 0.216). Panel B presents results of those models.

Model 1 shows that both Company Factors (F = 10.592, p = 0.002) and System-specific Factors

(F = 9.720, p = 0.003) significantly affect planning of review/inquiry tests. Model 2 shows that

only Company Factors seem to affect planning of substantive tests (F = 11.341, p = 0.002).

These results imply that management information risk is primarily addressed through

review/inquiry and substantive tests. Risk factors at the company level (e.g., management

attitude and competence) are addressed both through review/inquiry and substantive tests. Risk

factors at the system level are addressed through review/inquiry.

Insert Table 5 About Here

DISCUSSION

This paper’s purpose is to provide descriptive evidence on the number and nature of

systems risk factors that auditors identify for actual clients, and the relationship of those risk

factors to risk assessments and audit testing. The study is important because no empirical

evidence exists regarding these issues. This study’s research design is unique in that it asks

auditors to provide judgments and decisions relative to their own current clients. As such, the

design is similar to behavioral research studies that capture responses to current conditions (as

opposed to surveys that ask for responses generalizing past experiences), but is similar to

archival research in that it captures data on actual clients (as opposed to hypothetical cases).

This study provides a number of interesting findings. Our first set of findings concerns

the nature of risk factors identified, and their association with increased system risk assessments.

In terms of numbers of risk factors identified, system factors predominate in the EDP security

area, while similar numbers of company factors and system factors are identified for the

management information area. Apparently, this group of auditors primarily considers the features

of the system for security risk, but places more emphasis on management style and competence

when considering issues regarding information quality. We also find that auditors’ risk

assessments in both EDP security and management information areas increase when system or

company-level factors are identified, but the association is stronger in the management

information area. This suggests that risk judgments in the management information area of

systems risk are better calibrated. One explanation for this finding is that auditors may have more

experience in making unassisted assessments of the level of risk associated with problems

concerning the nature of a client’s management information, relative to assessing the

implications of EDP security issues, as the latter are often addressed with the help of a firm’s

EDP specialists.

Another finding derived from categorizing risk factors is the observation that relatively

few of the specific factors identified by the two auditors from each engagement team are

common. This result implies that participants were thinking of different specific client conditions

when planning their audit programs. While there may be valid reasons for the discrepancy (e.g.,

auditors in different engagement roles perform different audit functions, and interact with

different levels of client personnel), this issue is deserving of further research. This finding

underscores the importance of team communication regarding audit planning judgments and

decisions.

Our second set of findings concerns the relationship between the specific risk areas we

examined. As measured by both risk factors and risk assessments, neither the EDP security nor

the management information area dominates as being of greater concern across this panel of

audit clients. Relatedly, there is a significant correlation in risk between the areas, implying that

systems with security problems also produce less useful management information. We

previously noted possible reasons for this commonality, most importantly the quality of

personnel and the level of effort applied to developing and maintaining a business system. We

also find that clients with higher business risk (i.e., relatively poor financial health) tend to have

higher EDP security risk.

Our third set of findings relates to the tests auditors perform in the systems risk areas

studied. In the EDP security risk area, review/inquiry tests predominate as the initial response to

identified risks, followed by tests of controls. Tests of controls are planned by only half of those

identifying an EDP Security risk factor. For the management information area, review/inquiry

tests are also most frequent, with substantive tests performed much less frequently (by only one-

third of those identifying a management information risk factor). These results suggest that in

practice, investigating systems risk through substantive or control tests is relatively rare

compared with review/inquiry. However, it is important to note that our study addresses only the

initial design of audit procedures, and thus may underestimate the frequency of substantive and

control tests. Auditors may first investigate an identified issue through review of client work or

client inquiry, but follow up using other procedures based on the evidence gained in the initial

investigation.

In addition to investigating the nature of audit tests planned, we also investigated which

particular types of risk factors are associated with test planning. Several interesting results arise

from this investigation. First, we find that when the number of risk factors is controlled for, the

level of the risk assessment provides no incremental explanatory power regarding audit test

planning. Thus, the risk factors, rather than the risk assessment, seem to drive audit test

selection.9 Second, within the EDP security risk area, system-specific factors are clearly the only

type of factor that drives testing decisions. Primarily, these are issues of system upkeep and

documentation. Auditors seem to view system upkeep and documentation problems as indicative

of trouble, and want to apply review, inquiry and control tests to investigate the impact these

problems will have on the ability of the client’s systems to perform appropriately. Third, within

the management information risk area, company-level factors (such as management ability/style

and organizational structure) are associated with planning of both review/inquiry and substantive

tests. System-specific factors play less of a role in this area, being significantly associated only

with review/inquiry tests. These results suggest that when viewing the risks associated with the

quality of information provided by a system, auditors tend to focus on the characteristics of the

people and the organization, rather than the specifics of the system, when planning the audit

program.

In sum, our results provide a preliminary view of the relationship of specific risk factors

to testing decisions. This study’s findings imply that in the post-SAS No. 94 world, in which the

effect of client IT characteristics on audit strategy must be considered, audit firms should adopt

procedures requiring documentation of the effect of specific client fact patterns on audit planning

decisions. Such documentation will provide essential information for assessing the

effectiveness and efficiency of various possible responses to identified systems risk factors.

Our results also provide some guidance to management, which under the Sarbanes-Oxley

Act (2002) must certify that it has evaluated the effectiveness of the internal control system.

Additionally, this research benefits corporate audit committees, who under the Sarbanes-Oxley

Act have to oversee the risk evaluation performance of public accounting firms. Understanding

the risk factors that are important to auditors will help these committees evaluate the

effectiveness of the audit being performed by their auditors.

Finally, this study contributes to the literature on systems effectiveness. Reviews of the

systems literature by Stone (1990) and Arnold (1995) suggest advantages and limitations of

traditional approaches (analytical and interpretive) to systems evaluation. This paper contributes

an alternative means of addressing systems quality, demonstrating the usefulness of auditors’

evaluations of their clients’ systems risk as a source of data for future research in this area.

9
While it is possible that risk assessments may be more useful in explaining the extent of tests planned, rather than
their nature, this study does not address the extent of testing.

References

American Institute of Certified Public Accountants. 1983. Statement on Auditing Standards No. 47: Audit Risk and Materiality in Conducting an Audit. New York, NY: AICPA.

____. 1988. Statement on Auditing Standards No. 55: Consideration of the Internal Control Structure in a Financial Statement Audit. New York, NY: AICPA.

____. 1995. Statement on Auditing Standards No. 78: Consideration of Internal Control in a Financial Statement Audit: An Amendment to Statement on Auditing Standards No. 55. New York, NY: AICPA.

____. 1997. Statement on Auditing Standards No. 82: Consideration of Fraud in a Financial Statement Audit. New York, NY: AICPA.

____. 2001. Statement on Auditing Standards No. 94: The Effect of Information Technology on the Auditor's Consideration of Internal Control in a Financial Statement Audit. New York, NY: AICPA.

Arnold, V. 1995. Discussion of an experimental evaluation of measurements of information system effectiveness. Journal of Information Systems 9 (Fall): 85-91.

Asare, S.K., and A. Wright. 2002. The impact of fraud risk assessments and a standard audit program on fraud detection plans. Working paper, University of Florida (April).

Bedard, J. 1989. An archival investigation of audit program planning. Auditing: A Journal of Practice and Theory 8 (Fall): 57-71.

Bedard, J.C., and A. Wright. 1994. The functionality of decision heuristics: Reliance on prior audit adjustments in evidential planning. Behavioral Research in Accounting 7 (Supplement): 62-89.

Davis, J.T. 1996. Experience and auditor's selection of relevant information for preliminary control risk assessments. Auditing: A Journal of Practice & Theory (Spring): 16-37.

Davis, L.R., D.N. Ricchuite, and G. Trompeter. 1993. Audit effort, audit fees, and the provision of nonaudit services to audit clients. The Accounting Review 68 (January): 135-150.

Hackenbrack, K., and R. Knechel. 1997. Resource allocation decisions in audit engagements. Contemporary Accounting Research 14 (Fall): 481-499.

Haskins, M. 1987. Client control environments: An examination of auditors' perceptions. The Accounting Review 62 (July): 542-563.

Houston, R., M. Peters, and J. Pratt. 1999. The audit risk model, business risk and audit planning decisions. The Accounting Review 74:3 (July): 281-298.

Johnstone, K., and J. Bedard. 2001. Engagement planning, bid pricing, and client response: The effects of risk and market context in initial attest engagements. The Accounting Review 76 (April).

Lainhart, IV, J. W. 2001. An IT assurance framework for the future. Ohio CPA Journal 60 (January-March): 19-23.

Libby, R., J. Artman, and J. Willingham. 1985. Process susceptibility, control risk, and audit planning. The Accounting Review 60 (April): 212-230.

Mock, T.J., and J.L. Turner. 2002. An archival study of audit fraud risk assessments made under SAS No. 82. Working paper, University of Southern California.

Mock, T., and A. Wright. 1993. An exploratory study of auditor evidential planning judgments. Auditing: A Journal of Practice & Theory (Fall): 39-61.

Mock, T., and A. Wright. 1999. Are audit program plans risk-adjusted? Auditing: A Journal of Practice & Theory 12 (Spring): 55-74.

Stone, D. 1990. Assumptions and values in the practice of information systems evaluation. Journal of Information Systems (Fall): 1-17.

Tucker, G. 2001. IT and the audit. Journal of Accountancy (September): 41-43.

Waller, W.S. 1993. Auditors' assessments of inherent and control risk in field settings. The Accounting Review 68 (October): 783-803.

Zimbelman, M.F. 1997. The effects of SAS 82 on auditors' attention to fraud risk factors and audit planning. Journal of Accounting Research 35 (Supplement): 75-97.

TABLE 1
Descriptive Statistics on Types of Risk Factors Identified

                                 1. Raw number of      2. Number of unique     3. Number (percent) of clients
Risk Factor Type                 risk factors          risk factors            with at least one risk factor,
                                 identified            identified              by type

EDP Security

External factors 0 0 0

Company factors 18 17 11 (47.8 %)

System-specific factors 47 40 16 (69.6 %)

Total risk factors 65 57

Number of clients with at least 17 (73.9 %)


one EDP Security risk factor

Management Information

External factors 4 4 0

Company factors 34 32 17 (73.9 %)

System-specific factors 32 32 18 (78.3 %)

Total risk factors 70 68

Number of clients with at least 20 (87.0 %)


one Management Information risk factor

This table describes the nature of systems factors identified by participants (46 auditors of 23 engagement
teams). Column 1 shows the total number of risk factors identified by the 46 participants. Column 2
presents the number of unique risk factors identified, after removing duplicate factors (i.e., those
identified by both members of an engagement team). Column 3 shows the number (percent) of the 23
clients for which at least one risk factor of a given type was identified.

TABLE 2
Descriptive Statistics on Risk Assessments and the Nature of Audit Tests Planned

Panel A. EDP Security

                                 Mean        Standard Deviation

Review/inquiry tests 0.91 1.19


Control tests 0.43 0.81
Substantive tests 0.07 0.33

Total 1.41 1.68

Risk assessment 1.78 1.03

Panel B. Management Information


                                 Mean        Standard Deviation

Review/inquiry tests 1.09 1.27


Control tests 0.02 0.15
Substantive tests 0.30 0.66

Total 1.43 1.52

Risk assessment 1.70 1.10

The table presents the means and standard deviations of risk assessments, and types of audit tests planned,
for each risk area.

Table 3
Correlations Among Risk Factors, Risk Assessments and Audit Tests

Panel A. Within EDP Security


                              System Risk     EDP Security        Review/             Control
                              Factors         Risk Assessment     Inquiry Tests       Tests

Company Risk Factors 0.341*** 0.205* 0.189 0.389***

System Risk Factors 0.208* 0.659*** 0.368***

Panel B. Within Management Information


                              System Risk     Management Information     Review/             Substantive
                              Factors         Risk Assessment            Inquiry Tests       Tests

Company Risk Factors 0.146 0.383*** 0.531*** 0.392***

System Risk Factors 0.461*** 0.528*** 0.135

Panel C. Between Risk Areas


                              Management Information:      Management Information:
                              Risk Factors Identified      Risk Assessment

EDP Security:
Risk Factors Identified 0.455***

EDP Security:
Risk Assessment 0.340**

Panels A and B present Pearson correlations of risk factors with risk assessments and audit tests. Risk
factors are measured as the number of factors identified within each category, and audit tests are
measured as the number of each type of test planned. Panel C shows correlations between risk areas in
terms of risk factors identified and risk assessments. *** denotes significance at p < 0.01, ** denotes
significance at p < 0.05, and * denotes significance at p < 0.10, one-tailed.

Table 4
EDP Security: Relationship of Risk Factors, Risk Assessments and Nature of Audit Tests

Panel A. Multivariate ANOVA


Multivariate Tests
Effect Wilks’ Lambda F Significance
Intercept 0.809 3.149 0.035
Company factors 0.982 0.246 0.864
System-specific factors 0.517 12.439 0.000
EDP security risk 0.970 0.419 0.741
assessment

Panel B. Tests of Effects (significant models only)


Model 1: Control tests
Source                        Type III Sum of Squares    df    Mean Square    F    Probability

Intercept 2.781 1 2.781 5.137 0.029


Company factors 0.267 1 0.267 0.492 0.487
System-specific factors 4.811 1 4.811 8.884 0.005
EDP security risk 0.514 1 0.514 0.949 0.336
assessment
Corrected Model 6.562 3 2.187 4.040 0.013
Adjusted R Squared 0.169

Model 2: Review/inquiry tests


Source                        Type III Sum of Squares    df    Mean Square    F    Probability

Intercept 2.691 1 2.691 2.998 0.091


Company factors 0.001 1 0.001 0.001 0.971
System-specific factors 22.213 1 22.213 24.744 0.000
EDP security risk 0.105 1 0.105 0.116 0.735
assessment
Corrected Model 25.949 3 8.650 9.635 0.000
Adjusted R Squared 0.365

This table presents results of estimating MANOVA models of audit tests planned as a function of the type
of risk factor identified, while controlling for the level of risk assessed. To allow estimation of the
multivariate ANOVA, risk factors are measured as identification of at least one factor within a category.
Audit tests are measured as the number of each type of test planned.

Table 5
Management Information: Relationship of Risk Factors, Risk Assessments and Nature of
Audit Tests

Panel A. Multivariate ANOVA


Multivariate Tests
Effect Wilks’ Lambda F Significance
Intercept 0.674 6.438 0.001
Company factors 0.556 10.652 0.000
System-specific factors 0.761 4.189 0.011
Management information 0.957 0.594 0.623
risk assessment

Panel B. Tests of Effects (significant models only)


Model 1: Review/inquiry tests
Source                        Type III Sum of Squares    df    Mean Square    F    Probability

Intercept 11.316 1 11.316 11.558 0.001


Company factors 10.370 1 10.370 10.592 0.002
System-specific factors 9.517 1 9.517 9.720 0.003
Management information 0.792 1 0.792 0.809 0.374
risk assessment
Corrected Model 31.336 3 10.445 10.668 0.000
Adjusted R Squared 0.392

Model 2: Substantive tests


Source                        Type III Sum of Squares    df    Mean Square    F    Probability

Intercept 1.105 1 1.105 3.213 0.080


Company factors 3.901 1 3.901 11.341 0.002
System-specific factors 0.188 1 0.188 0.547 0.464
Management information 0.118 1 0.118 0.344 0.561
risk assessment
Corrected Model 5.294 3 1.765 5.130 0.004
Adjusted R Squared 0.216

This table presents results of estimating MANOVA models of audit tests planned as a function of the type
of risk factor identified, while controlling for the level of risk assessed. To allow estimation of the
multivariate ANOVA, risk factors are measured as identification of at least one factor within a category.
Audit tests are measured as the number of each type of test planned.

Appendix
Risk Factors Identified, by Type

                                                        EDP Security      Management Information
1. External factors
a) Regulatory environment 0 1
b) Industry/Competition 0 3
Subtotal 0 4

2. Company factors
a) Control environment 10 25
Management ability / competence 2 8
Incentives / performance pressures 0 4
Management style/attitude 8 13
b) Integration with data from subsidiaries 2 5
c) Reporting structure of IS function within
the organization 3 1
d) Sufficiency of budget 3 3
Subtotal 18 34

3. System factors
a) Keeping up with the pace of change in the
systems area 11 4
b) Competency of staff 4 0
c) Sufficiency of staff (i.e., numbers) 2 1
d) Supervision of staff 0 1
e) Turnover of staff 4 0
f) Controls over system security 4 0
g) Documentation 24 0
h) Nature of information produced 1 13
i) Accuracy of information produced 1 7
j) Appropriate distribution of reports 0 1
k) Timing of reports 0 3
l) Interface with users 0 2
Subtotal 47 32
Overall Total 65 70
