
Psychiatry Research 170 (2009) 262–266
doi:10.1016/j.psychres.2008.11.001


A psychometric evaluation of the Personality Assessment Inventory short form clinical scales in an inpatient psychiatric sample
Samuel J. Sinclair a,*, Caleb J. Siefert a, Hal S. Shorey b, Daniel Antonius c, Andrew Shiva c,d, Kendra Kehl-Fie a, Mark A. Blais a
a Massachusetts General Hospital and Harvard Medical School, Psychological Evaluation and Research Laboratory (PEaRL), 1 Bowdoin Square, 7th Floor, Boston, MA, USA
b Widener University, Institute for Graduate Clinical Psychology, Chester, PA, USA
c New York University School of Medicine, New York, NY, USA
d John Jay College of Criminal Justice, New York, NY, USA
* Corresponding author. E-mail address: jsincl@post.harvard.edu (S.J. Sinclair).

Abstract
Few studies have assessed the psychometric properties of the Personality Assessment Inventory short-form (PAI-SF) clinical scales, and none have conducted these evaluations using participants from psychiatric inpatient units. The present study evaluated item-level tests of scaling assumptions of the PAI-SF using a large (N = 503) clinical sample of participants who completed the PAI during their admission to a psychiatric inpatient unit. Internal consistency reliability was high across scales, and tests of item-scale convergence and discrimination generally confirmed hypothesized item groupings. Scale-level correlations supported unique variance being measured by each scale. Finally, agreement between the PAI short- and full-form scales was found to be high. The results are discussed with regard to scale interpretation. © 2008 Elsevier Ireland Ltd. All rights reserved.

Article history: Received 7 June 2008; received in revised form 17 October 2008; accepted 6 November 2008.
Keywords: Personality Assessment Inventory short-form; PAI-SF; Psychometric

1. Introduction

The Personality Assessment Inventory (PAI; Morey, 1991, 2007) is a broadband self-report measure of personality and psychopathology that is increasingly utilized for clinical and research purposes (Holden, 2000; Piotrowski, 2000). The PAI contains a number of features that make it appealing across settings, such as a 4-point item-response format, items written at a 4th–6th grade reading level, and non-overlapping clinical scales. The 11 clinical scales include: Somatic Complaints (SOM), Anxiety (ANX), Anxiety-Related Disorders (ARD), Depression (DEP), Mania (MAN), Paranoia (PAR), Schizophrenia (SCZ), Borderline Features (BOR), Antisocial Features (ANT), Alcohol Problems (ALC), and Drug Problems (DRG). To date, a number of studies across a variety of populations have supported the psychometric adequacy of the PAI clinical scales (e.g., Morey, 1991, 2007; Deisinger, 1995; Boone, 1998; Holden, 2000; Braxton et al., 2007). The PAI may be particularly useful to inpatient clinicians who need to rapidly assess and diagnose patient difficulties and plan treatments, as it provides valuable clinically relevant information with minimal demands on inpatient staff (Morey, 1996; Boone, 1998; LePage and Mogge, 2001). A PAI Short Form (PAI-SF) has also been developed (Morey, 1991), which produces scores for all 11 of the PAI clinical scales.


The PAI-SF is composed of the first 160 items of the PAI. Because the items of the PAI were ordered such that items with the strongest item-scale correlations for their respective scale or subscale appear earlier in the test, item content for almost all of the PAI scales and subscales is assessed by the first 160 items. This ordering approach was intended to maximize the PAI-SF's internal consistency and stability, as well as to optimize scale discrimination using fewer items (Morey, 1991). The PAI-SF may be preferred over the full form under a number of conditions. For example, examiners may wish to use it as a screen for psychopathology to determine whether a more detailed assessment is indicated in settings where respondent fatigue is an issue, such as an inpatient setting (LePage and Mogge, 2001; Baity et al., 2007). The PAI-SF may also be preferred in situations where a respondent discontinues the full PAI prior to completing all of the items, or when there is strong evidence to suggest that the respondent did not reliably complete the latter items (Morey and Hopwood, 2004; Siefert et al., 2007). Researchers similarly may prefer the PAI-SF to reduce the burden on research participants. Optimally, such decisions would involve careful consideration of the psychometric properties of the PAI-SF scales. However, few studies have examined these properties (Morey, 1991; Frazier et al., 2006), so it is difficult to gauge at this time whether the steps taken to maximize the psychometric adequacy of the PAI-SF clinical scales have been effective, especially across different populations. The present study begins to address this gap in the PAI-SF literature.
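To make the short-form scoring logic concrete, the sketch below shows how short-form raw scale scores could be derived from a full-form response record by restricting each scale to those of its items that fall within the first 160 positions. This is an illustration only: the item-to-scale mapping `scale_items` is a hypothetical placeholder (the actual PAI item key is proprietary), and conversion of raw scores to T-scores via the published norms is not shown.

```python
import numpy as np

# Illustrative sketch only. "scale_items" is a hypothetical mapping from scale
# name to the 1-based item numbers keyed to that scale on the full form; the
# real PAI item key is not reproduced here.

N_FULL_FORM_ITEMS = 344
N_SHORT_FORM_ITEMS = 160  # the PAI-SF is simply the first 160 items of the full form


def short_form_raw_scores(responses, scale_items):
    """Compute PAI-SF raw scale scores from a 344-item response vector (0-3 per item).

    Only the items of each scale that fall within the first 160 positions
    contribute, which is how the short form restricts every clinical scale.
    """
    responses = np.asarray(responses, dtype=float)
    assert responses.shape[0] == N_FULL_FORM_ITEMS
    scores = {}
    for scale, items in scale_items.items():
        short_items = np.array([i for i in items if i <= N_SHORT_FORM_ITEMS], dtype=int)
        scores[scale] = float(responses[short_items - 1].sum()) if short_items.size else 0.0
    return scores


# Hypothetical usage (item numbers invented for illustration):
# scores = short_form_raw_scores(responses, {"SOM": [3, 27, 51], "ANX": [4, 18, 60]})
```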

Building on prior research, it focuses on the psychometric properties of the PAI-SF clinical scales in an inpatient setting. To date, we are aware of only two studies that have examined the psychometric properties of the PAI-SF (Morey, 1991; Frazier et al., 2006). Using the census-matched normative sample from his initial PAI validation research, Morey (1991) found that 10 of the 11 clinical scales of the PAI-SF produced coefficient alphas of 0.70 or greater. Using a large sample of patients referred for neuropsychological assessment, Frazier et al. (2006) obtained highly similar results. In both studies the Drug Problems (DRG) scale was the only clinical scale to fall below this cutoff (α = 0.63 and 0.60, respectively). Morey (1991) also reported strong agreement between the clinical scales of the PAI-SF and the PAI full form, with correlation coefficients ranging from 0.84 to 0.95. These studies provide some initial data regarding the psychometric properties of the PAI-SF clinical scales. However, continued assessment of the PAI-SF across populations is important in establishing its strengths and weaknesses. Further, with the exception of internal consistency, prior research on the PAI-SF has been conducted at the scale level. As such, additional research is necessary to examine the performance of the individual items and evaluate whether they are functioning as hypothesized. The present study fills this gap by examining the scaling assumptions of the PAI-SF clinical scales, specifically testing item-scale convergence and discrimination at the item level.
2. Method

2.1. Participants

This study was approved by the institutional review board (IRB) at the Massachusetts General Hospital, Boston, MA. Study participants were inpatients admitted to a medical-psychiatric unit at a large northeastern hospital between January 1999 and May 2007. All participants completed the full 344-item PAI as part of their initial clinical assessment. In total, 646 PAI protocols were collected, and of these N = 503 (78%) were considered valid for analysis purposes (see discussion below). Profile validity was determined using Morey's (1991, 2007) critical cut-scores for the four full-form validity scales (i.e., Inconsistency [ICN] ≥ 73; Infrequency [INF] ≥ 75; Negative Impression Management [NIM] ≥ 92; and Positive Impression Management [PIM] ≥ 68). The final sample was diverse in terms of gender (57% female), age (M = 42.63, S.D. = 15.24), and education (M = 14.91, S.D. = 2.97).

Decisions about profile validity were informed first by a series of analyses examining the congruence between the PAI full-form and PAI-SF validity scales. The PAI-SF contains three (INF, NIM, and PIM) of the four validity scales contained in the PAI full form (ICN being unique to the latter). We first assessed overall agreement for protocols deemed valid and invalid using Morey's (1991, 2007) published cut-scores for the three indexes common to both PAI forms (i.e., INF ≥ 75; NIM ≥ 92; and PIM ≥ 68). Second, the congruence of each individual validity indicator was assessed between the PAI-SF and the full form. Finally, we assessed the number of protocols identified by the PAI full-form ICN index that would not be identified by the PAI-SF.
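As an illustration of this screening rule, the minimal sketch below applies the published cut-scores to a dictionary of validity-scale T-scores. The function name and example values are ours and are not part of the PAI materials; only the cut-scores themselves come from the text above.

```python
# Full-form cut-scores reported above (all in T-score units); a protocol is
# treated as invalid if any validity scale meets or exceeds its critical cut.
FULL_FORM_CUTS = {"ICN": 73, "INF": 75, "NIM": 92, "PIM": 68}
SHORT_FORM_CUTS = {"INF": 75, "NIM": 92, "PIM": 68}  # ICN is not scored on the PAI-SF


def is_valid_profile(validity_t_scores, cuts):
    """Return True if no validity scale meets or exceeds its critical cut-score."""
    return all(validity_t_scores[scale] < cut for scale, cut in cuts.items())


# Example: a protocol flagged only by ICN is invalid on the full form
# but would pass the short-form screen (the discrepancy discussed below).
protocol = {"ICN": 75, "INF": 60, "NIM": 70, "PIM": 45}
print(is_valid_profile(protocol, FULL_FORM_CUTS))   # False
print(is_valid_profile(protocol, SHORT_FORM_CUTS))  # True
```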

Of the 646 protocols collected, overall agreement between the PAI-SF and the PAI full form for profile validity was 85.3% (N = 551 protocols). That is, across both forms of the PAI, the three validity indicators consistently identified participants as valid or invalid 85.3% of the time, leaving 95 (14.7%) cases where there was a discrepancy. Across the specific validity indexes, the INF scales were in agreement for 96.6% (N = 625) of the protocols; the NIM scales were in agreement for 95.2% (N = 616) of the protocols; and the PIM scales were in agreement for 97.7% (N = 632) of the protocols. Of the 95 discrepant cases (those determined to be invalid using the PAI full-form validity scales but not with the PAI-SF), 51 (54%) were identified by the ICN scale, the only validity index not available on the PAI-SF. Given the strong overall agreement between the PAI-SF and PAI full-form validity scales, the decision was made to define validity for the psychometric analyses based upon the PAI full-form validity indexes. Further, because the purpose of this study was to assess the psychometric properties of the PAI-SF clinical scales relative to the full-form PAI scales, this approach seemed justified and reasonable. However, we acknowledge that by using the full-form profile to determine inclusion in the sample we may have inflated our findings, as a modest degree of discordance (approximately 9%) has been removed from the study sample.

A recent analysis of admission data from June 2001 to June 2002 showed that the unit population was approximately 40% male and 60% female. The racial/ethnic breakdown among the patients was as follows: 72.5% Caucasian, 6.4% African American, and 5.4% Hispanic. Psychiatric diagnoses were distributed as follows: 47.6% depression, 19.5% bipolar disorder, 6.1% schizophrenia/schizoaffective disorder, 2.2% substance abuse disorders, 2.2% eating disorders, 1.3% dementia, and 20.1% other (including psychosis not otherwise specified).

2.2. Data analysis

The purpose of this study was to assess the scaling assumptions of the PAI-SF. The analysis followed the approach of Ware et al. (1997) and utilized the Multi-Item, Multi-Trait Analysis Program (MAP). First, internal consistency reliability was evaluated using Cronbach's coefficient alpha. Nunnally and Bernstein (1994) recommend values of 0.70 or greater as indicating adequate reliability. Second, item-scale convergence was evaluated by examining the adjusted item-to-scale correlations (where each item is removed from the scale score to correct for overlap). Ware et al. (1997) and others (e.g., Campbell and Fiske, 1959) suggest correlations of r = 0.40 or greater to support this assumption, although others, including Nunnally and Bernstein (1994), suggest a threshold of r = 0.30. Although this assumption is related to that of internal consistency, it is also a means of assessing the convergent validity of each item relative to the aggregate of the remaining items. Where these correlations are low, specific items may represent poor measures of the overarching construct. Third, item-scale discrimination was evaluated by comparing the correlation between an item and its hypothesized scale with the correlations between that item and all other scales, respectively. Steiger's t test for dependent correlations was used to evaluate whether the differences in these correlations were statistically significant (Steiger and Lind, 1980; Ware et al., 1997).
A scaling success rate was then calculated, representing the percentage of item-hypothesized-scale correlations that were significantly greater than the correlations between that item and all other scales, respectively. Fourth, scale-level correlations were examined relative to each scale's internal consistency reliability estimate to evaluate the extent to which each scale measured unique variance. To the extent that the correlation between two scales is low and the alphas are high, there is evidence for unique variance being measured by each scale. Fifth, floor and ceiling effects were evaluated to determine whether the entire range of each scale was being used. There is no suggested criterion for these statistics.
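For readers who wish to reproduce these item-level analyses on their own data, a minimal sketch of the computations described above is given below: Cronbach's alpha, corrected item-total correlations, the test for dependent correlations (implemented here in its common Williams form, as recommended by Steiger), and floor/ceiling percentages. This is not the MAP program itself, and the function names are ours.

```python
import numpy as np
from scipy import stats


def cronbach_alpha(items):
    """Cronbach's coefficient alpha for an (n_respondents x k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)


def corrected_item_total(items):
    """Item-scale convergence: correlate each item with its scale total minus that item."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])


def steiger_t(r_jk, r_jh, r_kh, n):
    """Williams/Steiger t for two dependent correlations sharing variable j.

    Here r_jk is the item's correlation with its own scale, r_jh its correlation
    with a competing scale, and r_kh the correlation between the two scales.
    """
    det = 1 - r_jk**2 - r_jh**2 - r_kh**2 + 2 * r_jk * r_jh * r_kh
    r_bar = (r_jk + r_jh) / 2
    t = (r_jk - r_jh) * np.sqrt(((n - 1) * (1 + r_kh)) /
                                (2 * ((n - 1) / (n - 3)) * det + r_bar**2 * (1 - r_kh)**3))
    p = 2 * stats.t.sf(abs(t), df=n - 3)  # two-tailed
    return t, p


def floor_ceiling(scale_scores, min_possible, max_possible):
    """Percent of respondents at the lowest and highest possible raw scores."""
    s = np.asarray(scale_scores)
    return 100 * np.mean(s == min_possible), 100 * np.mean(s == max_possible)
```

Under this sketch, a scaling success would be counted for an item whenever `steiger_t` yields a positive t with p < 0.05 against every competing scale.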

Table 1
Psychometric evaluation of the PAI short-form clinical scales.

| Scale | Item-convergent validity^a | Item-discriminant validity^b | Average item-scale correlation | % scaling success^c | % scaling success (non-significant)^d | Internal consistency reliability^e | % floor | % ceiling |
|---|---|---|---|---|---|---|---|---|
| SOM | 0.31–0.70 | 0.01–0.67 | 0.54 | 89.2 | 98.3 | 0.86 | 2.6 | 0.0 |
| ANX | 0.42–0.73 | 0.01–0.62 | 0.59 | 86.7 | 100.0 | 0.88 | 0.8 | 0.6 |
| ARD | 0.22–0.65 | 0.01–0.62 | 0.49 | 83.3 | 94.2 | 0.83 | 0.0 | 0.4 |
| DEP | 0.39–0.73 | 0.00–0.73 | 0.62 | 90.8 | 99.2 | 0.90 | 2.4 | 1.0 |
| MAN | 0.24–0.52 | 0.00–0.48 | 0.40 | 85.0 | 92.5 | 0.77 | 0.6 | 0.0 |
| PAR | 0.36–0.54 | 0.01–0.44 | 0.47 | 93.3 | 95.8 | 0.82 | 0.8 | 0.0 |
| SCZ | 0.24–0.56 | 0.02–0.52 | 0.43 | 76.7 | 97.5 | 0.79 | 2.8 | 0.0 |
| BOR | 0.39–0.67 | 0.01–0.68 | 0.55 | 77.7 | 93.8 | 0.85 | 0.2 | 0.0 |
| ANT | 0.25–0.63 | 0.02–0.50 | 0.43 | 89.2 | 98.3 | 0.78 | 7.0 | 0.0 |
| ALC | 0.68–0.79 | 0.07–0.42 | 0.76 | 100.0 | 100.0 | 0.88 | 61.0 | 3.2 |
| DRG | 0.45–0.70 | 0.00–0.56 | 0.59 | 100.0 | 100.0 | 0.82 | 28.2 | 1.8 |

a Range of item-to-scale correlations, with respective items removed from the total score.
b Range of correlations between items and all other (non-hypothesized) scales.
c Percent of items correlating significantly higher (Steiger's t test; P < 0.05) with their hypothesized scale than with other scales.
d Percent of items correlating higher with their hypothesized scale than with other scales (statistical significance not considered).
e Internal consistency reliability (Cronbach's coefficient alpha).


Table 2
PAI short-form reliability coefficients (on the diagonal, in parentheses) and inter-scale correlations.

| Scale | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. SOM | (0.86) | | | | | | | | | | |
| 2. ANX | 0.57 | (0.88) | | | | | | | | | |
| 3. ARD | 0.47 | 0.78 | (0.83) | | | | | | | | |
| 4. DEP | 0.59 | 0.74 | 0.63 | (0.90) | | | | | | | |
| 5. MAN | 0.20 | 0.24 | 0.30 | 0.09 | (0.77) | | | | | | |
| 6. PAR | 0.33 | 0.39 | 0.45 | 0.36 | 0.42 | (0.82) | | | | | |
| 7. SCZ | 0.45 | 0.60 | 0.59 | 0.58 | 0.33 | 0.54 | (0.79) | | | | |
| 8. BOR | 0.50 | 0.69 | 0.73 | 0.72 | 0.36 | 0.55 | 0.63 | (0.85) | | | |
| 9. ANT | 0.24 | 0.25 | 0.35 | 0.24 | 0.50 | 0.42 | 0.45 | 0.50 | (0.78) | | |
| 10. ALC | 0.11 | 0.14 | 0.21 | 0.13 | 0.16 | 0.22 | 0.18 | 0.29 | 0.35 | (0.88) | |
| 11. DRG | 0.18 | 0.12 | 0.18 | 0.14 | 0.15 | 0.30 | 0.26 | 0.30 | 0.58 | 0.44 | (0.82) |

However, the extent to which a given population aggregates at either the floor or the ceiling limits the ability of a scale to discriminate both within and across individuals at the poles. Sixth, means, standard deviations, and intraclass correlations (ICCs) were examined to evaluate the extent to which the PAI short-form scales reproduced the full-form scales overall. In group comparisons, ICCs are considered excellent when greater than 0.80, moderate when between 0.60 and 0.80, adequate when between 0.40 and 0.60, and low when less than 0.40 (Shrout and Fleiss, 1979). Finally, the percentages of short-form and full-form scale scores elevated above the clinical T-score cutoff of 70, as well as the agreement between the two forms in terms of whether they yielded significant elevations (yes/no), were calculated.
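A minimal sketch of these short- versus full-form comparisons is given below, assuming ICC(2,1) (two-way random effects, absolute agreement) as the agreement index, since the specific Shrout and Fleiss model is not stated above. The two forms are treated as "raters" of the same respondents, and the elevation-agreement helper mirrors the T ≥ 70 classification; function names are ours.

```python
import numpy as np


def icc_2_1(scores):
    """ICC(2,1), absolute agreement, for an (n_respondents x k_forms) score matrix."""
    y = np.asarray(scores, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ms_rows = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-respondents
    ms_cols = n * ((y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-forms
    ss_err = ((y - grand) ** 2).sum() - (n - 1) * ms_rows - (k - 1) * ms_cols
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)


def elevation_agreement(short_t, full_t, cut=70):
    """Percent of respondents classified the same way (elevated vs. not) by both forms."""
    short_t, full_t = np.asarray(short_t), np.asarray(full_t)
    return 100 * np.mean((short_t >= cut) == (full_t >= cut))


# Example usage for one clinical scale, given paired T-score arrays:
# icc = icc_2_1(np.column_stack([short_t, full_t]))
# agree = elevation_agreement(short_t, full_t)
```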

3. Results and discussion

Reliability (Cronbach's coefficient alpha) estimates are presented in Table 1 for each of the PAI-SF clinical scales. Alphas ranged from 0.77 (MAN) to 0.90 (DEP), with a median of 0.83 (ARD). In all cases except SOM (α = 0.86) and ANT (α = 0.78), the internal consistency reliability statistics reported here are greater than those reported by Morey (1991). In most cases these differences were small, although coefficient alpha for DRG was substantially larger in the present study (α = 0.82) than in Morey's (1991) sample (α = 0.63). Likewise, the reliability estimates presented here mirror those published by Frazier et al. (2006), with the exception of DRG, which again was low in their sample (α = 0.60). All alphas in the current study exceeded the 0.70 standard suggested by Nunnally and Bernstein (1994).

Item-scale convergence was generally supported, with most items correlating at r = 0.40 or greater with their hypothesized scales after correction for overlap. Table 1 presents both the range of item-scale correlations and the average item-scale correlation for each of the clinical scales. Eight of the 11 scales had at least one item fall below this standard, and MAN had the lowest average item-scale correlation (r = 0.40). The percentage of item-scale correlations attaining this standard ranged from 100% (ANX, ALC, DRG) to 50% (MAN) across the clinical scales.

Item-scale discrimination was also generally supported, with the majority of items being stronger measures of their hypothesized constructs than of other constructs. Table 1 presents scaling success rates (that is, the percentage of item-scale correlations that were significantly larger than the correlations between that item and all other scales), both with and without consideration of statistical significance. The range of correlations between items and the constructs they are not hypothesized to measure is also presented in Table 1. Most items within all scales displayed correlations with their hypothesized scales that were significantly greater (P < 0.05) than their correlations with other scales, with scaling success ranging from 100% (ALC and DRG) to 76.7% (SCZ) using Steiger's t test for dependent correlations. Without consideration of statistical significance, the range was 100% (ALC, DRG, and ANX) to 92.5% (MAN). MAN contained one item (item 27) that displayed significantly greater correlations with two other constructs (BOR and ARD, respectively) than with MAN. Likewise, BOR contained one item that evidenced a significantly greater correlation with the DEP scale than with the BOR scale.

Scale-level correlations relative to each scale's internal consistency reliability are presented in Table 2. With several exceptions, most inter-scale correlations were low relative to the high reliability levels, indicating unique variance measured by each scale. Moderate correlations were observed between ANX and ARD (r = 0.78) and between ANX and DEP (r = 0.74). Likewise, BOR displayed moderate correlations with ANX (r = 0.69), ARD (r = 0.73), and DEP (r = 0.72). These moderate correlations are not unexpected given the similarity of the underlying constructs and the comorbidity between them.

Floor and ceiling effects are also presented in Table 1. Six of the 11 scales had no participants scoring at the ceiling; the highest ceiling rate was for ALC, with 3.2% of participants scoring at the highest possible value.

Table 3
Comparisons between PAI short- and full-form scales.

| Scale | PAI short-form mean T-score (S.D.) | PAI full-form mean T-score (S.D.) | Difference score^a | Intra-class correlation^b | % short-form T-scores ≥ 70 | % full-form T-scores ≥ 70 | % agreement^c |
|---|---|---|---|---|---|---|---|
| SOM | 64.3 (13.6) | 62.3 (13.3) | 1.6 | 0.96 | 36.4 | 31.2 | 90.9 |
| ANX | 65.6 (14.9) | 64.9 (13.5) | 0.7 | 0.96 | 38.2 | 36.6 | 90.5 |
| ARD | 61.6 (14.5) | 60.3 (14.5) | 1.3 | 0.94 | 31.2 | 25.0 | 91.1 |
| DEP | 71.9 (17.9) | 70.6 (17.1) | 1.3 | 0.96 | 55.9 | 54.1 | 93.8 |
| MAN | 50.0 (11.2) | 49.0 (11.0) | 1.0 | 0.93 | 5.6 | 5.0 | 97.8 |
| PAR | 53.9 (11.5) | 54.0 (11.8) | 0.1 | 0.94 | 8.7 | 10.3 | 94.8 |
| SCZ | 57.6 (12.9) | 57.5 (12.7) | 0.1 | 0.93 | 19.5 | 17.9 | 91.7 |
| BOR | 62.9 (12.5) | 62.1 (13.1) | 0.8 | 0.96 | 33.8 | 28.8 | 90.7 |
| ANT | 51.1 (10.6) | 51.7 (10.6) | 0.6 | 0.92 | 6.4 | 7.6 | 96.8 |
| ALC | 54.4 (16.6) | 53.4 (14.7) | 1.0 | 0.93 | 16.3 | 13.1 | 96.4 |
| DRG | 55.6 (15.5) | 55.9 (16.3) | 0.3 | 0.97 | 16.1 | 15.3 | 97.2 |

a Difference between short-form and full-form T-scores; all differences fell within the standard error of measurement for each respective scale, as reported by Morey (1991, 2007).
b Intra-class correlation between short-form and full-form T-scores.
c Percentage of agreement between PAI short- and full-form scales on elevation ≥ 70 (yes/no).


With the exception of two scales (ALC and DRG), similar results were found for floor effects, with few respondents scoring at the minimum across the remaining nine scales (range = 0% to 7.0%). Slightly more than one-quarter of participants scored at the floor for DRG (28.2%), and 61.0% reported no symptoms on ALC. Although there are no published standards for these statistics, they are useful for understanding the extent to which the full range of a given scale is utilized in a given population. When large percentages of people score at either the floor or the ceiling of a scale, it may indicate highly skewed or bimodal distributions. Violating the assumption of normality limits the ability of researchers to use data from these scales in statistical analyses and limits a scale's ability to discriminate individual differences in the middle or at either tail of the distribution.

Table 3 presents means and standard deviations, differences between short- and full-form T-scores, and ICCs. In general, the PAI-SF scales closely reproduced the scores yielded by the longer full-form clinical scales. Means and standard deviations were comparable across forms. Difference scores ranged from 0.1 (PAR, SCZ) to 1.6 (SOM) T-score points, and all fell within the standard error of measurement for each respective scale as reported by Morey (1991, 2007). Finally, ICCs between the short- and full-form scales all exceeded 0.90, and five scales exceeded 0.95, indicating high agreement. Overall, these results suggest that the PAI-SF clinical scales produce highly accurate estimates of their longer, full-form counterparts and do so with considerably less respondent burden.

Finally, Table 3 also presents the percentage of PAI short- and full-form scales elevated above a T-score of 70, as well as the overall agreement (%) between short- and full-form scales in terms of whether they yielded this elevation. Differences between short- and full-form scales in the rate of elevations of 70 or greater were generally small, ranging from 0.6% (MAN) to 6.2% (ARD). Further, the percent agreement between short- and full-form scales on these elevations was generally high, ranging from 90.5% (ANX) to 97.8% (MAN). In terms of clinical usage, these results further demonstrate the congruence of short-form scale elevations with the full form.

3.1. Implications and conclusion

Although a robust literature has emerged on the psychometric properties of the PAI full form (Deisinger, 1995; Boone, 1998; Holden, 2000; Morey, 2007), only recently have studies begun to examine the properties of the PAI-SF (Morey, 1991; Frazier et al., 2006). These studies have provided important initial data regarding the psychometric properties of the PAI-SF clinical scales. However, we know of no studies that have examined the psychometric properties of the PAI-SF in an inpatient population, a population in which the PAI-SF may be particularly useful. Moreover, prior research has been conducted at the scale level, with no examination of item-level scaling assumptions. The present study contributes to the literature on the PAI-SF by addressing some of these gaps. The results presented here generally support the hypothesized item groupings of the PAI-SF.
Corroborating previous research, the scales were found to be internally consistent and in many cases yielded reliability levels that exceeded those in other published studies (Morey, 1991; Frazier et al., 2006). Hypothesized item groupings were generally supported, as evidenced by the majority of items displaying strong linear relationships with their hypothesized scales. Items also tended to be stronger measures of their hypothesized scales than of other scales. Two exceptions were MAN and BOR, both of which contained items that were stronger measures of other constructs. This indicates some degree of item misspecification in this sample and should be examined further in future research.

Scale-level correlations also supported the hypothesis that unique and reliable variance is being measured by each scale, with the few exceptions noted above. Contrary to the expectation that larger cross-sections of an inpatient sample would score at the ceiling of these scales, no evidence of ceiling effects was found in the current study. Conversely, substantial floor effects were observed for ALC, limiting variability and the ability of this scale to differentiate among severity levels within this sample.

In summary, this study sought to assess the structural validity of the PAI-SF using the multi-item, multi-trait analysis approach developed by Ware et al. (1997). Future investigations should incorporate other methods, including confirmatory factor analysis and item response theory, to better understand the fit and dimensionality of the PAI-SF scales. Because both of these methods require very large samples to produce stable results, they were not employed in the present study. It is also important to add a cautionary note regarding the use of the PAI-SF in cross-cultural settings. The full form of the PAI has been translated into several languages, of which Spanish is likely the most frequently used. Studies have demonstrated that while the reliabilities of the Spanish and English PAIs are nearly identical, some differences do exist (Rogers et al., 1995). With this in mind, we caution clinical practitioners that the results of the present study should not be generalized to PAIs in any language other than English until appropriate reliability and validity have been established on a language-specific basis. Finally, the present study assessed the PAI-SF among participants who completed the entire 344-item form. Additional psychometric and predictive validity studies need to be conducted with respondents who complete only the first 160 items. This approach will help clarify the impact of slightly differing rates of detecting valid and invalid profiles, given that the PAI-SF contains only three of the four full-form validity indexes.

References
Baity, M.R., Siefert, C.J., Chambers, A., Blais, M.A., 2007. Deceptiveness on the PAI: a study of naïve faking with psychiatric inpatients. Journal of Personality Assessment 88, 16–24.
Boone, D., 1998. Internal consistency reliability of the Personality Assessment Inventory with psychiatric inpatients. Journal of Clinical Psychology 54, 839–843.
Braxton, L.E., Calhoun, P.S., Williams, J.E., Boggs, C.D., 2007. Validity rates of the Personality Assessment Inventory and the Minnesota Multiphasic Personality Inventory-2 in a VA medical center setting. Journal of Personality Assessment 88, 5–15.
Campbell, D.T., Fiske, D.W., 1959. Convergent and discriminant validation by the multitrait–multimethod matrix. Psychological Bulletin 56, 81–105.
Deisinger, J.A., 1995. Exploring the factor structure of the Personality Assessment Inventory. Assessment 2, 173–179.
Frazier, T.W., Naugle, R.I., Haggerty, K.A., 2006. Psychometric adequacy and comparability of the short and full forms of the Personality Assessment Inventory. Psychological Assessment 18, 324–333.
Holden, R., 2000. Are there promising MMPI substitutes for assessing psychopathology and personality? Review and prospect. In: Dana, R.H. (Ed.), Handbook of Cross-cultural and Multicultural Personality Assessment. Lawrence Erlbaum Associates, Inc., Mahwah, NJ, pp. 267–292.
LePage, J.P., Mogge, N.L., 2001. Validity rates of the MMPI-2 and PAI profiles in a rural inpatient facility. Assessment 8, 67–74.
Morey, L.C., 1991. The Personality Assessment Inventory: Professional Manual. Psychological Assessment Resources, Odessa, FL.
Morey, L.C., 1996. An Interpretive Guide to the Personality Assessment Inventory. Psychological Assessment Resources, Odessa, FL.
Morey, L.C., 2007. Personality Assessment Inventory: Professional Manual, 2nd ed. Psychological Assessment Resources, Odessa, FL.
Morey, L.C., Hopwood, C.J., 2004. Efficiency of a strategy for detecting back random responding on the Personality Assessment Inventory. Psychological Assessment 16, 197–200.
Nunnally, J.C., Bernstein, I.H., 1994. Psychometric Theory, 3rd ed. McGraw-Hill, New York.
Piotrowski, C., 2000. How popular is the Personality Assessment Inventory in practice and training? Psychological Reports 86, 65–66.
Rogers, R., Flores, J., Sewell, K., 1995. Initial validation of the Personality Assessment Inventory–Spanish Version with clients from Mexican American communities. Journal of Personality Assessment 64, 340–348.


Shrout, P.E., Fleiss, J.L., 1979. Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin 86, 420–428.
Siefert, C.J., Kehl-Fie, K.A., Blais, M.A., 2007. Detecting back irrelevant responding on the Personality Assessment Inventory in a psychiatric inpatient setting. Psychological Assessment 19, 469–473.

Steiger, J., Lind, J., 1980. Statistically based tests for the number of common factors. Paper presented at the annual meeting of the Psychometric Society, Iowa City, IA.
Ware, J.E., Harris, W.J., Gandek, B.L., Rogers, B.W., Reese, P.R., 1997. MAP-R: Multitrait/Multi-item Analysis Program, Revised. The Health Assessment Lab, Boston, MA.
