
APPLIED NEUROPSYCHOLOGY, 18: 61–68, 2011
Copyright © Taylor & Francis Group, LLC
ISSN: 0908-4282 print / 1532-4826 online
DOI: 10.1080/09084282.2010.523392

A Canonical Correlation Analysis of Intelligence and Executive Functioning


Andrew S. Davis, Eric E. Pierson, and W. Holmes Finch
Department of Educational Psychology, Ball State University, Muncie, Indiana

Executive functioning is one of the most researched and debated topics in neuropsychology. Although neuropsychologists routinely consider executive functioning and intelligence in their assessment process, more information is needed regarding the relationship between these constructs. This study reports the results of a canonical correlation study between the most widely used measure of adult intelligence, the Wechsler Adult Intelligence Scale, 3rd edition (WAIS-III; Wechsler, 1997), and the Delis-Kaplan Executive Function System (D-KEFS; Delis, Kaplan, & Kramer, 2001). The results suggest that, despite considerable shared variability, the measures of executive functioning maintain unique variance that is not encapsulated in the construct of global intelligence.

Key words: executive functioning, intelligence, neuropsychology

Address correspondence to Andrew S. Davis, Ph.D., HSPP, Associate Professor of Psychology, Director of Doctoral Internships, Department of Educational Psychology, Teachers College Room 515, Ball State University, Muncie, IN 47306. E-mail: davis@bsu.edu

The evaluation of executive functioning has a long history in professional and academic psychology and neurology (Baddeley & Della Sala, 1998; Lezak, Howieson, & Loring, 2004; Strub & Black, 2000). Its nature continues to be one of the most widely researched and debated topics in neuropsychology. Despite the extensive research conducted in this area and the clinical utility in conceptualizing executive dysfunction in specific patient populations, a precise and widely accepted definition remains elusive (Salthouse, 2005). The difficulty in establishing a clear and consistent definition has led to a relatively slow process in understanding its relationship to other aspects of cognition. Historically, much of the literature surrounding executive functioning is limited to the types of measures and methods that individual research groups have used when operationalizing it (Burgess, 1997), and this continues to be a concern. For example, Salthouse (2005) evaluated the uniqueness of executive functioning by testing the degree to which his chosen measures of executive functioning and cognitive
abilities were independent of age. The cognitive abilities included vocabulary, reasoning, spatial visualization, episodic memory, and speed. The results of his analysis suggested that the measures of executive functioning did not provide a significant relationship with age that was independent of the cognitive abilities. It is important to note that Salthouse did not construct a unique factor for executive functioning but instead tested the individual performance of different measures of executive functioning within the model. As a result, it is not surprising that each of the individual tests had a significant relationship to the factor structure already present and did not appear independent of these measures.

Many neuropsychologists continue to incorporate the Wechsler Intelligence Scales into neuropsychological and cognitive assessments (O'Donnell, 2009). Although the relatively recent publication of tests of executive function with strong psychometric properties has increased the ease of measurement of executive functions in children and adults, it is common for researchers and practitioners to extrapolate deficits in executive functions from performance on intelligence tests (Duff, Schoenberg, Scott, & Adams, 2005). The estimation of other abilities or psychological characteristics from performance on intelligence tests has a long tradition in clinical practice (Iverson & Tulsky, 2003). This may be necessary because many patients with neurological and psychiatric problems demonstrate poor test-taking skills due to fatigue, reduced attention, motivation, and disorganization, which often necessitate reduced or brief assessment batteries. In addition, trends in reimbursement for services often encourage efficiency in testing that may limit the number of instruments available (Delis, Kaplan, & Kramer, 2001). Given the relative lack of research on the functional relationship between executive functions and intelligence, hampered by the aforementioned problems with defining executive functioning, the utility of extrapolation for these constructs is unclear. Thus, the purpose of the current study was to investigate the canonical relationship between core measures of executive functioning and the Wechsler Adult Intelligence Scale, 3rd edition (WAIS-III).

Most definitions of executive functioning and intelligence incorporate higher order thinking (Carroll, 1993; Delis et al., 2001; Wechsler, 1997). One definition of executive functions is that they govern and coordinate other cognitive processes by activating, inhibiting, planning, or monitoring the performance of more basic cognitive abilities. Thus, executive functioning should influence estimates of primary cognitive abilities when existing knowledge is required to solve novel problems, a primary task of the Wechsler intelligence tests. The goal of assessing discrete narrow-band abilities involves the creation of highly structured and organized samples of behavior. This process may reduce the ability to fully assess the broad nature and range of executive functions (Lezak et al., 2004). Unlike many of its competitors, the Wechsler Intelligence Scale allows examinees to produce broader samples of behavior in the form of lengthier samples of speech and permits the direct observation of organization and sequencing, which lend themselves to a clinical estimation of executive functioning. Because of its widespread use and strong psychometric properties, researchers have often included the Wechsler scale when exploring the structure of cognition.

The connection between cognitive processing and executive functioning has been explored statistically. Boone, Ponton, Gorsuch, Gonzalez, and Miller (1998) included a shortened form of an earlier version of the Wechsler series of intelligence tests in an exploratory factor analytic study with several measures of executive functioning. They determined that a three-factor model was the best fit, and estimates of Verbal and Performance Intelligence loaded onto a common factor with two common measures of executive ability, the Rey-Osterrieth Complex Figure and the Auditory Consonant Trigrams. Other research has suggested that Spearman's g and executive functioning are closely linked (Duncan, Johnson, Swales, & Freer, 1997; Duncan et al., 2008). The development and use of the Carroll-Horn-Cattell model of intelligence (Carroll, 1997) has also influenced the discussion regarding the nature of executive functioning. For example, Rabbitt (1997) suggested many executive functioning tasks share common variance with Gf. This is also reflected in the Woodcock-Johnson Tests of Cognitive Abilities, 3rd edition (Woodcock, McGrew, & Mather, 2001), which taps the Gf factor with subtests measuring planning, concept formation, and rule learning (Dean & Woodcock, 2003).

Not all research has consistently supported the hypothesis that executive functions and general intelligence overlap. Correlational analyses have indicated limited relationships between subtests from the Wechsler scale and classic measures of executive function. For example, a study by Lamar, Zonderman, and Resnick (2002) found low correlations between the Similarities subtest and Letter Fluency (.14), Category Fluency (.12), Trails A (.05), and Trails B (.05). The only significant positive correlations for Digit Span and measures of executive functioning were Digit Forwards with Letter Fluency (.19) and Digit Backwards with Category Fluency (.21). These findings are in line with the work of Salthouse (2005), who found that executive functioning was related to reasoning and speeded tasks. Similarly, de Frias, Dixon, and Strauss (2006) found a significant relationship between performance on measures of executive functioning and fluid intelligence as estimated by a letter-sequencing task. In an investigation of the relationship between executive functions and reading, arithmetic, and nonverbal reasoning, van der Sluis, de Jong, and van der Leij (2007) found that arithmetic and reading were associated more with nonexecutive processing tasks than with executive function tasks. An investigation of the predictive ability of executive functioning on global cognitive processing was conducted by Polderman et al. (2006), who followed 237 twin pairs longitudinally and studied 172 of them 12 years later. They found that executive functioning at age 5 was a poor predictor of IQ 12 years later.

One of the reasons the literature remains unclear regarding the relationship between executive functioning and intelligence is the lack of studies that utilize multiple outcome measures of executive functioning (Lamar et al., 2002). This in turn may be an outgrowth of a tendency to narrowly operationalize a rather diverse set of abilities. In addition, a bias in the clinical literature toward populations with behavioral dysfunction has led to the analysis of clinical archival data in which batteries have limited the number and type of tests that can be administered (see Duff et al., 2005, and Boone et al., 1998, for examples). In order to overcome this problem, the authors of the current study examined participants who completed the entire Delis-Kaplan Executive Function System (Delis et al., 2001). The D-KEFS is comprised of nine stand-alone tests chosen because of their ability to help in the clinical identification of specific problems in executive functioning. The authors of the D-KEFS argued that the instrument was not designed to reflect a particular theoretical perspective on executive functioning, in part because of the ongoing debate regarding its definition; however, they argued that it is critical to apply a process approach to understanding its nature. The D-KEFS allows many different aspects of executive functioning to be measured with a common and nationally normed set of standardization tables.

College students were enrolled in this study because of the continued development of executive functioning and intelligence in late adolescence (Reynolds & Horton, 2008). The recruitment of college students is particularly beneficial for several reasons. First, late adolescence is still a period in which executive functioning and the frontal lobes are continuing to develop. Second, understanding the relationship between executive functioning and intelligence requires research with the general population, as opposed to clinical populations, which are more likely to introduce confounds such as comorbid conditions. Finally, college students have a wide range of abilities and cognitive profiles.

Given the previous work of Duncan and colleagues (1997, 2008), Rabbitt (1997), and Salthouse (2005), the authors sought to describe any significant relationship that might be present between cognitive processing and executive functioning. In particular, it was suspected that the WAIS-III measures of processing speed (Coding) and Gf (Matrix Reasoning) would strongly correlate with the measures of executive functioning.

METHODOLOGY

Subjects

Participants were 63 (35 males and 28 females) college students at a large Midwestern university who engaged in this study in return for extra credit in undergraduate psychology courses. They ranged in age from 18 to 38 years (mean age = 19.98 years, standard deviation = 3.8 years). The ethnicity of the sample was distributed as follows: Caucasian (93.7%), African American (3.2%), and Hispanic (3.2%), with a mean of 13.76 years of education (standard deviation = 1.2). Two participants (3.1%) reported a history of learning disabilities, 5 participants (7.8%) reported a history of attention deficit hyperactivity disorder, and 2 participants (3.1%) reported a history of traumatic brain injury. Informed consent was obtained from all participants following approval of this study by the university Institutional Review Board.

Instrumentation

The D-KEFS (Delis et al., 2001) consists of nine stand-alone tests of verbal and nonverbal executive functioning. Although many of the tests are derivations of classic tests of executive functioning (e.g., Tower Tests, Trail Making Tests), the D-KEFS offers an impressive normative sample, improved standardization procedures, and a cognitive process approach to test interpretation. One score was chosen for each of the D-KEFS tests that the authors of this manuscript thought best represented the broad construct of executive functioning.

The WAIS-III (Wechsler, 1997) is the third iteration of the most widely used adult intelligence test. The core WAIS-III subtests yield composite scores for Verbal Intelligence (VIQ), Performance Intelligence (PIQ), and a measure of global intelligence, or Full Scale Intelligence (FSIQ). The VIQ, PIQ, and FSIQ were chosen as broad measures of intelligence for the canonical analysis between executive measures and intelligence. The WAIS-III subtest that most clearly measures Gf (Matrix Reasoning) was used to compare this ability with the executive functioning measures. The core WAIS-III subtest that estimates processing speed (Coding) was also compared with the measures of executive functioning.

Procedure

Participants were administered the D-KEFS and WAIS-III according to the procedures described in the test manuals. The examiners were advanced graduate students who had substantial training in neuropsychological and psychological assessment and in the administration and scoring of the D-KEFS and WAIS-III. Participants were administered both tests after completing an informed consent form approved by a university Institutional Review Board.
Data Analysis

Canonical correlation analysis was used to assess the strength and nature of the relationships between executive functions and intelligence. Canonical correlation is a statistical procedure specifically designed to allow for the estimation of correlation coefficients between sets of variables. It is particularly useful in cases where the variables in a set represent a unified system but where directional relationships are not expected. The procedure works by finding linear combinations within the variable sets that maximize the correlation among them. The results allow for both estimation of the correlation between the sets of variables and an indication of which variables contribute the most to each linear combination. Though canonical correlation is one of the least frequently used multivariate techniques, it is the appropriate strategy for evaluating the degree of relationship between multiple dependent and independent variables when the variables are continuous and there is no covariate (Tabachnick & Fidell, 2007). In addition, canonical correlation allows for the explicit testing of the null hypothesis that the two sets of variables are independent of one another (Johnson & Wichern, 2002). Rejection of this null hypothesis indicates that the sets are indeed related, which is the primary research question addressed in this study.

It is important to acknowledge that the sample size for this study is smaller than would be ideal. However, simulation studies examining the impact of sample size on the ability of canonical correlation to accurately estimate the relationships between sets of variables and to test for the significance of these relationships have shown that samples as small as 50 to 60 are sufficient (Mendoza, Markos, & Gonter, 1978; Naylor, Lin, Weiss, Raby, & Lange, 2010). Nonetheless, we recognize that the sample size-to-variable ratio is at the lower end of what researchers consider adequate for use in multivariate analysis (MacCallum, Widaman, Preacher, & Hong, 2001; Streiner, 1994). Given this fact, as well as the focus of the research questions on simple relationships between the sets of variables, the current study should be viewed as a preliminary analysis of the relationship between intelligence and executive functioning.

TABLE 1
Mean and Standard Deviation Statistics for the WAIS-III and D-KEFS

Variable  Mean  SD
WAIS-III Composites
  Verbal IQ  110.7  10.7
  Performance IQ  109.5  10.1
  Full Scale IQ  111.1  9.7
WAIS-III Subtests
  Vocabulary  12.6  2.6
  Similarities  11.5  2.9
  Arithmetic  11.3  2.4
  Digit Span  10.2  2.1
  Information  11.9  2.2
  Comprehension  13.1  2.5
  Picture Completion  10.9  3.3
  Digit Symbol-Coding  10.9  2.9
  Block Design  11.6  2.5
  Matrix Reasoning  12.3  2.2
  Picture Arrangement  11.6  3.0
D-KEFS Test Variables
  Trail Making Test: Number-Letter Sequencing  10.3  2.4
  Verbal Fluency Test Category Switching: Total  11.5  3.4
  Design Fluency Test Switching: Total Correct  12.9  3.3
  Color-Word Interference Test: Completion Times: Inhibition/Switching  11.1  2.3
  Sorting Test Combined Conditions 1 & 2: Combined Description Score  11.1  2.7
  Twenty Questions Test: Total Weighted Achievement Score  10.9  2.8
  Word Context Test: Total Consecutively Correct  11.4  2.2
  Tower Test: Total Achievement Score  10.7  2.2
  Proverb Test: Total Achievement Score: Free Inquiry  10.6  2.8

Note. WAIS-III composites have a mean of 100 and SD of 15. D-KEFS tests have a mean of 10 and SD of 3.
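To make the analytic strategy concrete, the sketch below illustrates how a canonical correlation of this kind can be computed and tested against the null hypothesis that the two variable sets are independent. It is not part of the original study: the simulated score matrices, the QR/SVD implementation, and the use of NumPy and SciPy are illustrative assumptions, and Bartlett's chi-square approximation of Wilks' lambda stands in for whatever significance test the authors' software applied.

```python
import numpy as np
from scipy import stats


def canonical_correlations(x, y):
    """Return the canonical correlations between two score matrices.

    x is an n-by-p matrix (e.g., WAIS-III composites) and y is an n-by-q
    matrix (e.g., D-KEFS scores). After centering, the singular values of
    Qx' Qy, where Qx and Qy come from thin QR decompositions, are the
    canonical correlations in descending order.
    """
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    qx, _ = np.linalg.qr(xc)
    qy, _ = np.linalg.qr(yc)
    return np.clip(np.linalg.svd(qx.T @ qy, compute_uv=False), 0.0, 1.0)


def bartlett_chi2(corrs, n, p, q):
    """Bartlett's chi-square approximation for H0: the two sets are independent."""
    wilks = np.prod(1.0 - corrs ** 2)
    chi2 = -(n - 1 - (p + q + 1) / 2.0) * np.log(wilks)
    df = p * q
    return chi2, df, stats.chi2.sf(chi2, df)


if __name__ == "__main__":
    # Hypothetical data with the dimensions of the present study:
    # 63 examinees, 3 WAIS-III composites, 9 core D-KEFS scores.
    rng = np.random.default_rng(0)
    wais = rng.normal(100, 15, size=(63, 3))
    dkefs = rng.normal(10, 3, size=(63, 9))
    r = canonical_correlations(wais, dkefs)
    chi2, df, p_value = bartlett_chi2(r, n=63, p=3, q=9)
    print(f"First canonical correlation: {r[0]:.2f}")
    print(f"Canonical R-squared: {r[0] ** 2:.2f}")
    print(f"Bartlett chi2({df}) = {chi2:.1f}, p = {p_value:.3f}")
```

With the random data above, the first canonical correlation is small and nonsignificant, as it should be; substituting the observed WAIS-III and D-KEFS scores would yield the kind of estimate (e.g., the .73 reported below) on which this study rests.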

RESULTS

Descriptive statistics for the scales used in this study appear in Table 1. The slightly above average pattern of means is consistent given that college students constituted the sample.

Prior to conducting the canonical correlation analysis, the data were screened to ensure that the assumptions of the analysis were met. Normality of the variables was assessed using Q-Q plots for the individual variables, and all were found to conform very closely to the normal distribution. Outliers were assessed using leverage, with values greater than 2(number of variables/sample size), or 0.38, indicative of outlying observations (Tabachnick & Fidell, 2007). One individual was found to be an outlier using this method. To assess this individual's impact on the canonical correlation, an analysis was conducted excluding the case. None of the results differed to five decimal points, suggesting that the outlier did not have any undue biasing effect on the canonical correlation. The results presented below therefore include all of the subjects. Finally, linearity and homoscedasticity of the relationships among the variables were assessed using scatterplots, which verified that the relationships were linear for all pairs of variables and that the width of the scatters did not vary across the values of the variables.

In order to ensure that there was no gender effect on the variables used in the canonical correlation analyses, multivariate analysis of variance (MANOVA) was used. Specifically, the means of the WAIS-III composites, the WAIS-III subtests, and the D-KEFS subtests used in this study were compared between male and female subjects using three MANOVA models. There were no significant gender differences for the D-KEFS (p = .0871), the WAIS-III composites (p = .289), or the WAIS-III subtests (p = .2081). Therefore, it was determined that gender did not need to be controlled for in the estimation of the canonical correlations.

Only the first canonical correlation between the two sets of variables was found to be statistically significant, with a value of 0.73 and a canonical R² of 0.54. This latter value indicates that approximately 54% of the variation in one set of measures is accounted for by the other set. The canonical correlation was statistically significant (p < .05) and falls into the large range as defined by Cohen (1988).

The correlations between the individual measures and the overall canonical variate for their set can be analyzed to gain insight into the nature of each linear combination. A canonical variate represents the combination of the individual observed measures in a set, weighted based upon their relative importance in maximizing the canonical correlation discussed previously. Given that the canonical variates are determined so as to maximize the canonical correlation between the two variable sets, the correlations between individual variables and these variates indicate which variables were most strongly associated with the significant canonical correlation. In other words, variables most highly associated with their respective canonical variate can be interpreted as most strongly associated with the correlation between the two variable sets. Correlations greater than 0.32 were taken to indicate variables that contribute importantly to the canonical variates (Tabachnick & Fidell, 2007).

An examination of Table 2 reveals that all of the WAIS-III variables were strongly associated with the canonical variate, with the FSIQ being the strongest contributor. This result is not surprising given the global nature of the FSIQ: its construction is based upon shared and unique variance found in the VIQ and PIQ, and it thus represents the broadest measure of cognitive functioning. It is important to note that all three measures had correlations above 0.8, indicating their importance in the determination of the canonical correlation.

TABLE 2
Correlations Between Observed Variables and Their Canonical Variates

Variable  Correlation
WAIS-III Composites
  Verbal IQ  .89
  Performance IQ  .81
  Full Scale IQ  .99
D-KEFS Test Variables
  Trail Making Test: Number-Letter Sequencing  .32
  Verbal Fluency Test Category Switching: Total  .29
  Design Fluency Test Switching: Total Correct  .39
  Color-Word Interference Test: Completion Times: Inhibition/Switching  .25
  Sorting Test Combined Conditions 1 & 2: Combined Description Score  .49
  Twenty Questions Test: Total Weighted Achievement Score  .09
  Word Context Test: Total Consecutively Correct  .59
  Tower Test: Total Achievement Score  .23
  Proverb Test: Total Achievement Score: Free Inquiry  .42

The results in Table 2 also reveal that, of the D-KEFS variables, the Word Context Test: Total Consecutively Correct had the highest correlation with the canonical variate, at 0.59. This test taps the examinee's ability to use deductive reasoning, hypothesis testing, and mental flexibility to determine the meaning of nonsense words (Delis et al., 2001). The strong correlation is reasonable when it is considered that this test requires substantial verbal ability, a construct well known to be essential for doing well on the WAIS-III. The Sorting Test Combined Conditions 1 & 2: Combined Description Score was also clearly associated with the canonical variate for this set, with a correlation of 0.49. Other D-KEFS variables that met the 0.32 criterion were the Proverb Test: Total Achievement Score: Free Inquiry, the Design Fluency Test Switching: Total Correct, and the Trail Making Test: Number-Letter Sequencing. The other variables in this set had little or no association with the canonical variate and thus did not make a large contribution to the canonical correlation.

In order to further explore the relationship between intelligence and executive functioning, a second canonical correlation analysis was conducted in which the WAIS-III verbal and performance subtests were correlated as a group with the core D-KEFS measures used above. Only the first canonical correlation between these sets of variables was statistically significant (α = 0.05), with a value of 0.85 and a canonical R² of 0.73. The correlations between Matrix Reasoning and the D-KEFS measures can be seen in Table 3. The correlations between the canonical variates and the individual subscales appear in Table 4.

TABLE 3
Correlations Between WAIS-III Matrix Reasoning Subtest and Core D-KEFS Measures

D-KEFS Test Variable  Correlation
Trail Making Test: Number-Letter Sequencing  .10
Verbal Fluency Test Category Switching: Total  .03
Design Fluency Test Switching: Total Correct  .14
Color-Word Interference Test: Completion Times: Inhibition/Switching  .18
Sorting Test Combined Conditions 1 & 2: Combined Description Score  .40
Twenty Questions Test: Total Weighted Achievement Score  .21
Word Context Test: Total Consecutively Correct  .22
Tower Test: Total Achievement Score  .20
Proverb Test: Total Achievement Score: Free Inquiry  .37

Note. p < .05.

TABLE 4
Correlations Between Observed Variables and Their Canonical Variates

Variable  Correlation
WAIS-III Subtests
  Vocabulary  .75
  Similarities  .46
  Arithmetic  .11
  Digit Span  .12
  Information  .37
  Comprehension  .75
  Picture Completion  .24
  Digit Symbol-Coding  .24
  Block Design  .22
  Matrix Reasoning  .60
  Picture Arrangement  .54
D-KEFS Test Variables
  Trail Making Test: Number-Letter Sequencing  .09
  Verbal Fluency Test Category Switching: Total  .14
  Design Fluency Test Switching: Total Correct  .25
  Color-Word Interference Test: Completion Times: Inhibition/Switching  .60
  Sorting Test Combined Conditions 1 & 2: Combined Description Score  .33
  Twenty Questions Test: Total Weighted Achievement Score  .15
  Word Context Test: Total Consecutively Correct  .50
  Tower Test: Total Achievement Score  .17
  Proverb Test: Total Achievement Score: Free Inquiry  .71

These results demonstrate that among the WAIS-III subtests, Vocabulary, Comprehension, and Matrix Reasoning had the strongest relationships with the set of D-KEFS variables. In addition, Picture Arrangement, Similarities, and Information displayed correlations with their canonical variate above the threshold of 0.32 cited above. Conversely, Arithmetic, Digit Span, Picture Completion, Digit Symbol-Coding, and Block Design were not strongly associated with the canonical variate for the WAIS-III. Among the D-KEFS measures, the Proverb and Color-Word Interference tests had the strongest relationships with the canonical variate. In addition, Word Context and the Sorting Test also demonstrated correlations above the 0.32 cutoff value. The Trail Making, Verbal Fluency, Design Fluency, Twenty Questions, and Tower tests were not associated with this canonical variate.
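The correlations reported in Tables 2 and 4 are structure coefficients: correlations between each observed score and the first canonical variate of its own set. The fragment below is an illustrative sketch rather than the authors' code; the variable names, simulated data, and NumPy implementation are assumptions. It shows how such loadings can be obtained and screened against the .32 criterion used above.

```python
import numpy as np


def first_canonical_loadings(x, y):
    """Correlate each observed variable with its set's first canonical variate."""
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    qx, rx = np.linalg.qr(xc)
    qy, ry = np.linalg.qr(yc)
    u, s, vt = np.linalg.svd(qx.T @ qy)
    # Canonical weights map the centered scores onto the first pair of variates.
    a = np.linalg.solve(rx, u[:, 0])
    b = np.linalg.solve(ry, vt[0, :])
    x_variate = xc @ a
    y_variate = yc @ b
    loadings_x = [np.corrcoef(xc[:, j], x_variate)[0, 1] for j in range(x.shape[1])]
    loadings_y = [np.corrcoef(yc[:, j], y_variate)[0, 1] for j in range(y.shape[1])]
    return np.array(loadings_x), np.array(loadings_y)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    wais = rng.normal(100, 15, size=(63, 3))   # stand-in for the WAIS-III composites
    dkefs = rng.normal(10, 3, size=(63, 9))    # stand-in for the core D-KEFS scores
    wais_load, dkefs_load = first_canonical_loadings(wais, dkefs)
    # Variables with |loading| >= .32 would be flagged as important contributors,
    # following the criterion cited from Tabachnick and Fidell (2007).
    print("D-KEFS loadings:", np.round(dkefs_load, 2))
    print("Meets .32 criterion:", np.abs(dkefs_load) >= 0.32)
```

Run on the observed scores rather than random data, the loadings for the two sets would correspond to the values reported in Tables 2 and 4.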

DISCUSSION

In brief, the canonical correlation results demonstrate that the WAIS-III scores were strongly associated with the D-KEFS scores and that the two sets of measures shared a large degree of variance. This is consistent with the previous work and writings of Duncan et al. (1997). It is also interesting to note that, within the variable sets, the WAIS-III subtests were more strongly associated with their canonical variate than were the D-KEFS tests. This outcome may have been due to the fact that the WAIS-III intelligence tests are strongly associated with crystallized knowledge, whereas measures of executive functioning generally rely upon completion of novel tasks, which introduces more variability into measurement because of fewer repetitions (Burgess, 1997; Duncan et al., 1997; Rabbitt, 1997).

Although the scores on the D-KEFS demonstrated a weaker relationship with the canonical variate, the shared variance between the two sets was still significant, as noted above. This relationship appears to indicate a high degree of overlap between executive functioning and broader higher-order cognitive processing, with 54% of the variance in the two sets being shared. From the perspective of the WAIS-III variables, this may be due to the harmonious processing relationship between executive functioning and cognitive processing. From the perspective of the D-KEFS variables, the strong relationship indicates that the group of executive function tests was measuring a construct similar to broad measures of intelligence. Furthermore, these results suggest the D-KEFS tasks measured additional factors besides traditional executive functions. This is commensurate with previous findings that suggest overlap between executive and nonexecutive function measures (e.g., Lamar et al., 2002). For example, Obonsawin, Crawford, Page, Chalmers, Cochrane, and Low (2002) investigated the relationship between executive functioning and intelligence using the WAIS-R. They determined that tests of frontal lobe function and intelligence shared a significant level of variance and concluded that they measured similar constructs. These results are also consistent with the previous work of Boone et al. (1998), in which VIQ and PIQ loaded together with some, but not all, measures of executive functioning.

The fact that a strong relationship exists between executive functioning and cognitive processing is not surprising given the functional relationship between the two constructs and the necessity of executive functions both for solving novel tasks and for effectively acquiring crystallized knowledge. Though the authors expected to find significant relationships between both Digit Symbol-Coding and Matrix Reasoning and the D-KEFS, this expectation was only partially supported: the correlation with Matrix Reasoning was significant, but the relationship between Digit Symbol-Coding and the D-KEFS was not. This provides partial support for the work of Duncan and colleagues (1997, 2008), Rabbitt (1997), and, to a limited extent, Salthouse (2005).

Despite the significant canonical correlation, the D-KEFS was found to be measuring constructs independent of intelligence; this was unsurprising given that the D-KEFS was constructed to measure executive functioning and this construct is noticeably absent from the WAIS-III composites. Although the degree of overlap between the measures was large, the weaker and more diverse relationships of the D-KEFS variables to the canonical variate provide further evidence that some of the constructs associated with the D-KEFS are not measured by the WAIS-III. For example, the Color-Word Interference Test: Completion Times: Inhibition/Switching and the Tower Test: Total Achievement Score actually demonstrated inverse relationships. This is interesting when the well-established use of these tests to assess inhibition, cognitive set-shifting, and planning is considered alongside the supposition that these skills are essential for cognitive processing. The results of this study do suggest that clinicians should independently assess executive function skills when possible, especially when concerns exist regarding specific executive dysfunction constructs, although it may be possible to extrapolate an idea of broad executive functioning from the WAIS-III.

The current study has a few limitations. The sample size, though adequate for estimating and testing the correlation between the variable sets of interest, is nonetheless smaller than ideal for many multivariate analyses, including canonical correlation. As a result, the collection of a larger sample of data by the researchers in the future may allow for other types of multivariate analyses. It is important to recognize that the current sample size and its relatively low n:p ratio may make some of the findings, particularly those related to the WAIS-III subtest analysis, difficult to replicate. A larger sample size might also allow for a more theoretically driven understanding of the relationship between cognition and executive functioning, as suggested by McCloskey, Perkins, and Van Divner (2009), using confirmatory factor analysis. Future studies should focus on alternative scores from the D-KEFS, employ larger samples, and extend this work to children and to patients with neurological impairment.

REFERENCES
Baddeley, A., & Della Sala, S. (1998). Working memory and executive control. In A. C. Roberts, T. W. Robbins, & L. Weiskrantz (Eds.), The prefrontal cortex: Executive and cognitive functions (pp. 9–21). New York, NY: Oxford University Press.
Boone, K. B., Ponton, M. O., Gorsuch, R. L., Gonzalez, J. J., & Miller, B. L. (1998). Factor analysis of four measures of prefrontal lobe functioning. Archives of Clinical Neuropsychology, 13, 585–595.
Burgess, P. W. (1997). Theory and methodology in executive function research. In P. Rabbitt (Ed.), Methodology of frontal and executive function (pp. 81–116). East Sussex, England: Psychology Press.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York, NY: Cambridge University Press.
Carroll, J. B. (1997). The three-stratum theory of cognitive abilities. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 122–130). New York, NY: Guilford.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Mahwah, NJ: Lawrence Erlbaum.
de Frias, C. M., Dixon, R. A., & Strauss, E. (2006). Structure of four executive functioning tests in healthy older adults. Neuropsychology, 20, 206–214.
Dean, R. S., & Woodcock, R. W. (2003). Examiner's manual: Dean-Woodcock Neuropsychological Battery. Itasca, IL: Riverside Publishing.
Delis, D. C., Kaplan, E., & Kramer, J. H. (2001). Delis-Kaplan Executive Function System: Examiner's manual. San Antonio, TX: The Psychological Corporation.
Duff, K., Schoenberg, M. R., Scott, J. G., & Adams, R. L. (2005). The relationship between executive functioning and verbal and visual learning and memory. Archives of Clinical Neuropsychology, 20, 111–122.
Duncan, J., Johnson, R., Swales, M., & Freer, C. (1997). Frontal lobe deficits after head injury: Unity or diversity of function. Cognitive Neuropsychology, 14, 713–741.
Duncan, J., Parr, J., Woolgar, A., Thompson, R., Bright, P., Cox, S., . . . Nimmo-Smith, I. (2008). Goal neglect and Spearman's g: Competing parts of a complex task. Journal of Experimental Psychology: General, 137(1), 131–148.
Iverson, G. L., & Tulsky, D. S. (2003). Detecting malingering on the WAIS-III: Unusual Digit Span performance patterns in the normal population and in clinical groups. Archives of Clinical Neuropsychology, 18, 1–9. doi:10.1016/S0887-6177(01)00176-7
Johnson, R. A., & Wichern, D. W. (2002). Applied multivariate statistical analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.
Lamar, M., Zonderman, A. B., & Resnick, S. (2002). Contribution of specific cognitive processes to executive functioning in an aging population. Neuropsychology, 16, 156–162.
Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment. New York, NY: Oxford University Press.
MacCallum, R. C., Widaman, K. F., Preacher, K. J., & Hong, S. (2001). Sample size in factor analysis: The role of model error. Multivariate Behavioral Research, 36, 611–637.
McCloskey, G., Perkins, L. A., & Van Divner, B. (2009). Assessment and intervention for executive function difficulties. New York, NY: Routledge.
Mendoza, J. L., Markos, V. H., & Gonter, R. (1978). A new perspective on sequential testing procedures in canonical analysis: A Monte Carlo evaluation. Multivariate Behavioral Research, 13, 371.
Naylor, M. G., Lin, X., Weiss, S. T., Raby, B. A., & Lange, C. (2010). Using canonical correlation analysis to discover genetic regulatory variants. PLoS ONE, 5(5), 1–6.
Obonsawin, M. C., Crawford, J. R., Page, J., Chalmers, P., Cochrane, R., & Low, G. (2002). Performance on tests of frontal lobe function reflects general intellectual ability. Neuropsychologia, 40, 970–977.
O'Donnell, L. (2009). The Wechsler Intelligence Scale for Children-Fourth Edition. In J. A. Naglieri & S. Goldstein (Eds.), Practitioner's guide to assessing intelligence and achievement. Hoboken, NJ: John Wiley & Sons, Inc.
Polderman, T. J., Gosso, M. F., Posthuma, D., Van Beijsterveldt, T. C., Heutink, P., Verhulst, F. C., & Boomsma, D. I. (2006). A longitudinal twin study on IQ, executive functioning, and attention problems during childhood and early adolescence. Acta Neurologica Belgica, 106, 191–207.
Rabbitt, P. (1997). Introduction: Methodologies and models in the study of executive function. In P. Rabbitt (Ed.), Methodology of frontal and executive function (pp. 1–38). East Sussex, England: Psychology Press.
Reynolds, C. R., & Horton, A. M. (2008). Assessing executive functions: A life-span perspective. Psychology in the Schools, 45, 875–892.
Salthouse, T. A. (2005). Relations between cognitive abilities and measures of executive functioning. Neuropsychology, 19, 532–545.
Streiner, D. L. (1994). Regression in the service of the superego: The dos and don'ts of stepwise multiple regression. The Canadian Journal of Psychiatry / La Revue canadienne de psychiatrie, 39(4), 191–196.
Strub, R. L., & Black, F. W. (2000). The mental status examination in neurology (4th ed.). Philadelphia, PA: F. A. Davis Company.
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics. Boston, MA: Pearson.
van der Sluis, S., de Jong, P., & van der Leij, A. (2007). Executive functioning in children, and its relations with reasoning, reading, and arithmetic. Intelligence, 35, 427–449.
Wechsler, D. (1997). Wechsler Adult Intelligence Scale (3rd ed.). San Antonio, TX: Psychological Corporation.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III Tests of Cognitive Abilities. Rolling Meadows, IL: Riverside Publishing.

