
Rev. Phil. Psych. (2011) 2:77–88. DOI 10.1007/s13164-010-0041-0

Levels of Explanation Vindicated


Víctor M. Verdejo & Daniel Quesada

Published online: 25 September 2010 © Springer Science+Business Media B.V. 2010

Abstract Marr's celebrated contribution to cognitive science (Marr 1982, chap. 1) was the introduction of (at least) three levels of description/explanation. However, most contemporary research has relegated the distinction between levels to a rather dispensable remark. Ignoring such an important contribution comes at a price, or so we shall argue. In the present paper, first we review Marr's main points and motivations regarding levels of explanation. Second, we examine two cases in which the distinction between levels has been neglected when considering the structure of mental representations: Cummins et al.'s distinction between structural representation and encodings (Cummins in Journal of Philosophy, 93(12):591–614, 1996; Cummins et al. in Journal of Philosophical Research, 30:405–408, 2001) and Fodor's account of iconic representation (Fodor 2008). These two cases illustrate the kind of problems in which researchers can find themselves if they overlook distinctions between levels and how easily these problems can be solved when levels are carefully examined. The analysis of these cases allows us to conclude that researchers in the cognitive sciences are well advised to avoid risks of confusion by respecting Marr's old lesson.

Ever since Marr's influential work on the computational account of vision (Marr 1982) it has been a familiar idea that computational research can be taken to involve (at least) three levels of description/explanation. These are: the level of the function to be computed; the level of the algorithm that computes the given function; and finally, the level of realization of the function in hardware structures.
V. M. Verdejo (*) Department of Linguistics, Logic and Philosophy of Science, Universidad Autónoma de Madrid, Ctra. Colmenar Km. 15,6, Cantoblanco 28049 Madrid, Spain. e-mail: Victor.verdejo@uab.cat
D. Quesada Department of Philosophy, Universitat Autònoma de Barcelona, Campus UAB, Edifici B, Bellaterra 08193 Barcelona, Spain. e-mail: Daniel.quesada@uab.cat

This distinction between levels, which is perhaps best viewed as a fundamental point that clarifies successful explanations in cognitive psychology, seems to have been relegated to a rather dispensable remark in most of the contemporary literature in the philosophy of cognitive science. In our view, the tendency to unreflectively presuppose or even to simply ignore the distinction between levels in cognitive explanation has had dramatic consequences in the elucidation of crucial philosophical and psychological topics. In the present paper, we first briefly revisit the classic Marrian distinction in order to emphasize its main motivation and essence. As is well known, each level is supposed to explain a particular range of facts, but it must be emphasized that the full account of a particular phenomenon should be seen as an empirical account of that phenomenon at all levels of description/explanation. Second, we apply this lesson to two particular developments regarding the putative structure of mental representation: 1) the analysis of systematicity phenomena as proposed by Cummins et al. in both linguistic and nonlinguistic domains (Cummins 1996; Cummins et al. 2001, 2005); and 2) Fodor's account of the structure of iconic representation (Fodor 2008, chap. 6). In both cases, the distinction between levels of explanation is carelessly neglected with very undesirable results. A final concluding remark will encourage theorists working in the cognitive sciences to specify and reflect on the particular level or levels of explanation at which their contribution is meant to figure.

1 Inter-level explanations

As is well known, David Marr (1982, pp. 19–29) provided a model for understanding proper explanation in cognitive research. According to that model, a cognitive process should be accounted for at three levels. Level 1 states the function that is computed (and why): what Marr called the computational theory. Level 2 specifies the algorithm that implements this function, together with the representations required for the algorithm to work on. Finally, level 3 explains the realization in hardware of the algorithm specified at level 2.

Marr was not alone in distinguishing levels of explanation. For instance, Newell (1986) and Pylyshyn (1984) also introduced closely related level distinctions. On the other hand, Marr's levels raise several issues regarding, inter alia, the need to distinguish further levels (e.g., Peacocke 1986), the individualistic (e.g., Segal 1989) or anti-individualistic (e.g., Burge 1986) nature of computational contents, or the possibility of a nonintentional conception of computational approaches (e.g., Egan 1996). For ease of exposition, we will abstract from all these polemical territories. Nonetheless, it is worth noting that we will be assuming that Marr's topmost level involves representational contents and therefore an intentional characterization of the Marrian function.1
1 On this assumption, Newell's knowledge level and Pylyshyn's semantic level are closely related to (and arguably can be identified with) Marr's level 1. We thank an anonymous referee for this journal for bringing this point to our attention. Among many others, authors that have provided an intentional/semantic reading of Marr's topmost level include Bermúdez (1995), Burge (1986), Davies (1991), Kitcher (1988), Morton (1993) and Shapiro (1997).

Now, it feels a bit odd to revisit Marr's lesson after all these years, but ongoing uses in cognitive science make the review very worthwhile. The review will consist of answering three questions: 1) Why do we need levels of description/explanation at all? 2) How should the explanations at each level be determined? 3) What order should cognitive research follow; or, in other words, which level should be prior in the order of investigation? Refreshing our memories as regards these three questions will put us in a good position to apply the lesson Marr taught us to the two cases examined in sections 2 and 3.

What is the point of levels of explanation in the first place? From time to time it may seem that Marr's celebrated account of levels could just be ignored in favour of real research. After all, how much we know about a particular phenomenon does not seem to depend on any previous methodological assumptions that we may consider as helpful. However, it is totally misleading to suppose that drawing the distinction between Marrian levels is, in essence, a mere methodological and gratuitous additional step. Taking levels into consideration in explanations responds to the objective complexity of cognitive phenomena. As Marr emphasized, each level has its own range of facts to be explained, which may in turn bear only loose relations to facts considered at other levels.

These three levels are coupled, but only loosely. The choice of an algorithm is influenced, for example, by what it has to do and by the hardware in which it must run. But there is a wide choice available at each level, and the explication of each level involves issues that are rather independent of the other two. Each of the three levels of description will have its place in the eventual understanding of perceptual information processing, and of course they are logically and causally related. But an important point to note is that since the three levels are only loosely related, some phenomena may be explained at only one or two of them [Marr (1982), p. 25].
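
To make the three-way distinction, and the loose coupling the quotation stresses, a little more concrete, here is a minimal Python sketch of our own (it is not drawn from Marr, and the choice of sorting as the level-1 function is purely illustrative): one and the same computational-theory specification is computed by two structurally different algorithms, and nothing at level 1 decides between them.

```python
# A toy illustration of Marr's distinction (our own example, not Marr's).
# Level 1 (computational theory): WHAT is computed -- a function from an
# unordered sequence to the same items in ascending order -- and WHY
# (say, because downstream processes need ranked inputs).
# Level 2 (algorithm/representation): HOW it is computed. Two different
# algorithms realise the very same level-1 function.
# Level 3 (implementation) is abstracted away here: whatever runs the
# Python interpreter plays that role.

def insertion_sort(xs):
    """One level-2 algorithm: build the output by repeated insertion."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """A second, structurally different level-2 algorithm for the same function."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

data = [3, 1, 4, 1, 5, 9, 2, 6]
# Both algorithms agree on the level-1 function, so a purely level-1
# description cannot distinguish between them.
assert insertion_sort(data) == merge_sort(data) == sorted(data)
```

The only point of the sketch is that the level-1 description underdetermines the level-2 one: phenomena such as running time or characteristic error patterns would be explained at level 2 (or 3), not at level 1, which is one sense in which some phenomena may be explained at only one or two of the levels.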

Locating a particular account within Marr's levels is a fundamental clarification task concerning which relevant aspects of a cognitive phenomenon are taken into account.

How are the explanations at each level determined? It may strike one as obvious, but it is often forgotten, that it is part and parcel of Marr's conception of levels that the ultimate (and hopefully the only) way in which explanations at any of the levels are determined is by full-blooded empirical investigation. This also holds, of course, for Marr's computational theory or level 1. Functions that the mind computes are not a matter of mere a priori reflection, and still less of common-sense intuition. Of course, a priori considerations and folk-psychological categories may well guide the statement of explanations at level 1. However, as in the other two levels, such a statement of explanations must, in the end, be empirically tested.

Which level should come first? It has become quite usual to interpret Marr's model as providing some sort of heuristics for cognitive research, as if the consideration of level 1 should precede, as a matter of principle, research at the other levels. Marr, who for instance included a section titled 'Importance of computational theory' (1982, p. 27), perhaps contributed greatly to giving this impression. Moreover, some authors have explicitly endorsed the view that Marr's topmost level has methodological priority (e.g., Egan 1996; Segal 1989; Shapiro 1997). However, it is important to stress that the essence of Marr's levels is a point about explanation, not heuristics, as García-Carpintero (1995) emphasized. In other words, in Marr's distinction of levels there is no methodological constraint whatsoever about which level comes first. Therefore, which level takes precedence will depend on the specifics of the phenomena to be accounted for. What matters is that all three levels are levels at which an information processing device must be understood before one can be said to have understood it completely (Marr 1982, p. 24). Therefore, whatever the precise order followed in your preferred account of informational processes, Marr's fundamental contention is that it had better result in an account at all levels if it is to lead to full understanding of those processes.2

2 Systematicity and structure

Authors from the connectionist tradition (Cummins 1996; Cummins et al. 2001, 2005) have vehemently argued that, even if systematicity phenomena in language require mental representation (MR from now on) with a certain specifiable structure, it is not the case that they require MR in a language of thought (LOT henceforth). More precisely, these authors argue that, whereas MR in a LOT (or classical MR) entails that MR shares structure with (is a structural representation of) the linguistic domain, all that linguistic systematicity requires is that MR preserves the structure of language, something that can be done, as Smolensky et al. (1992) demonstrated mathematically with tensor-product networks, by means of MR that merely encodes (is an encoding of) linguistic structure. The distinction between MR that shares structure with language and MR that merely encodes that structure therefore becomes crucial. On the one hand, it shows that encoding, and not only LOT MR, is a good candidate for figuring in the account of primary (input–output) linguistic systematicity effects.

[…] Paul Smolensky, Géraldine Legendre, and Yoshiro Miyata have proven that for every classical parser (that is, a parser defined over classical representations) there exists a tensor-product network that is weakly (input–output) equivalent to the classical parser but does not employ classical representations. Thus, it appears that the classical explanation and the tensor-product explanation explain the systematicity of thought equally well [Cummins et al. (2001), p. 170].
2 According to this, cognitive research may, consistently with Marr's programme, begin with the lowest implementation level. This is so even if it is not possible to make progress when research remains confined to this level; cf. Marr's analogy: 'trying to understand perception by studying only neurons is like trying to understand bird flight by studying only feathers: it just cannot be done' (Marr 1982, p. 27).
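
For readers unfamiliar with the tensor-product construction mentioned in the quotation above, the following minimal NumPy sketch, which is our own illustration and not Smolensky, Legendre, and Miyata's actual model, shows the basic idea: filler vectors for 'John', 'loves' and 'Mary' are bound to role vectors by outer products and summed into a single object with no classical constituents, yet, with role vectors assumed orthonormal, the bound fillers can be recovered and systematically recombined.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical orthonormal role vectors (agent, relation, patient), obtained by
# QR-decomposing a random matrix; orthonormality makes exact unbinding possible
# in this toy case.
roles, _ = np.linalg.qr(rng.normal(size=(8, 3)))
AGENT, REL, PATIENT = roles.T

# Filler vectors for the lexical items.
fillers = {w: rng.normal(size=8) for w in ("John", "loves", "Mary")}

def encode(agent, rel, patient):
    """Superimpose filler (x) role outer products: no part of the result is
    itself a symbol for 'John' or for the agent role."""
    return (np.outer(fillers[agent], AGENT)
            + np.outer(fillers[rel], REL)
            + np.outer(fillers[patient], PATIENT))

def unbind(tensor, role):
    """Recover the filler bound to a role (exact because roles are orthonormal)."""
    return tensor @ role

def decode(vec):
    """Identify the filler recovered by unbinding (exact up to rounding here)."""
    return next(w for w, f in fillers.items() if np.allclose(f, vec))

jlm = encode("John", "loves", "Mary")

# The encoding supports the systematic variant by swapping what is bound to
# the agent and patient roles: an input-output analogue of recombination.
mlj = encode(decode(unbind(jlm, PATIENT)), decode(unbind(jlm, REL)),
             decode(unbind(jlm, AGENT)))

assert decode(unbind(mlj, AGENT)) == "Mary"
assert decode(unbind(mlj, PATIENT)) == "John"
```

The sketch is only meant to make vivid the point Cummins et al. draw from the proof: at the level of input–output behaviour, such an encoding can do the recombination work without anything that looks like a classical constituent structure.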

On the other hand, it shows that, in the case of nonlinguistic systematicity, LOT MR can at most encode the relevant structure (because LOT MR shares structure with language) and is therefore, on explanatory grounds, analogous to nonclassical representation.

Since a tensor-product scheme like that employed by Smolensky, Legendre, and Miyata need not share structure with the domains it represents in order to account for primary systematicity effects, such a scheme could, in principle, account for primary systematicity effects in a variety of structurally distinct domains. The same point can be made about classical representation: since it can be used structurally to encode domains such as music that are structurally unlike language, a classical scheme could account for primary systematicity effects in nonlinguistic domains. Since classical and nonclassical structural encodings are evidently on equal footing in this respect, we conclude again that there is no sound argument from primary systematicity effects in language to classical representation [Cummins et al. (2001), p. 184].

In short, if Cummins et al.'s considerations are sound, the classical argument from systematicity to structured MR in a LOT is certainly blocked.

There are many aspects of Cummins et al.'s developments that deserve careful attention. However, for present purposes, we would like to stress that these developments constitute a good example of the difficulties that a theorist may face when ignoring the distinction between levels of explanation. In particular, we will show that, without such a distinction, Cummins et al.'s own distinction between MR that shares structure with a given domain and MR that encodes that structure is either unintelligible or else cognitively irrelevant. The dramatic consequence will be that their account amounts at best to a highly undesirable oversimplification, and at worst to a serious misunderstanding of the systematicity issues they are concerned with.

The key point made by Cummins and allies, to repeat, is that the debate between classicist and connectionist models of structure can be accounted for in terms of the difference between classical schemas of MR, in which structure (S) is shared with a (linguistic) domain (D), and connectionist schemas of MR, in which that (linguistic) structure is encoded rather than directly shared. On reflection, however, it is not very clear what it is for a (schema of) MR to share S with D and, correspondingly, what it is for a (schema of) MR to encode the S of D, since these authors define an encoding negatively, as not being a sharing of structure (Cummins 1996, p. 599; Cummins et al. 2001, pp. 180 and 182). What does it mean, then, that MR shares S with D?

Indeed, we submit that there are only two straightforward interpretations of the claim that MR shares S with D. According to the first, D consists of a set of cognitive states whose systematic properties are to be explained. However, if D consists of cognitive states, we cannot make sense of the idea that MR (which is of course MR of a cognitive state) shares S with the members of D, because MR is already part of the cognitive states that it is allegedly sharing S with. According to this interpretation, we can say that MR shows or manifests the structure of D but not, intelligibly, that it shares S with D. Consider that our domain is the pair of sentences 'John loves Mary' and 'Mary loves John'. We come to know that those sentences are systematic variants of one another because, respecting grammatical constraints of combination, they are made out of the same constituents: 'John', 'Mary' and 'loves'. The scheme of representations in this case corresponds to words. Does the scheme of representations (consisting of words) share structure with the domain (consisting of sentences)? Not intelligibly. The right thing to say is that the scheme of representations shows or manifests the structure of the domain but not, meaningfully, that it shares structure with the domain.

On the second straightforward interpretation, D consists of a set of external objects or properties. This interpretation, which seems to be in accordance with the authors' contention that language, colour, algebra and the capitals of twenty states in the USA are different domains (cf. Cummins et al. 2001), makes sense because now we do have two things whose structure we can compare. However, there is a price to be paid: we are evaluating the structure of MR in relation to extra-mental objects or properties. That is, we are presupposing that, in order to explain cognitive systematicity phenomena, MR must preserve noncognitive structure, which is certainly at odds with Cummins's resistance to LOT representational schemes on the basis that, instead of being derived from the systematicity of language, the systematicity of thought ought to depend only on the structure of the mind (Cummins 1996, p. 595). If language as such is no good in elucidating the structure of the mind, extra-mental objects generally are no good either, and for exactly the same reason. In short, the distinction between the sharing and encoding of structure now makes sense but is cognitively irrelevant.

What Cummins et al. need in order for their distinction to work is to consider levels of explanation. Following this approach, we can assume that the members of D are cognitive states as accounted for at a high level of description. Thus, the relation of sharing S can be taken to hold between that high level of description and a lower level of description. The consideration of two different levels of description with a given structure S, which can be seen as roughly corresponding to Marr's level 1 and level 2, makes the distinction between MR that shares S with D and MR that encodes the S of D both intelligible and cognitively relevant. It is intelligible because we now have two different levels whose structure can meaningfully be shared. It is cognitively relevant because we are evaluating the structure of cognitive states as such (and not as dependent on outer reality).

To illustrate, following paradigmatic (linguistic) characterizations of systematicity, the high-level function will be based upon cases such as that anyone capable of producing/understanding 'John loves Mary' is also capable of producing/understanding 'Mary loves John'. More precisely, the (Marrian) topmost function will be a function that goes from a given propositional representation (e.g., the representation that John loves Mary) to its systematic variants obtained through recombination of constituents (e.g., the representation that Mary loves John). The structure identified at this high level is linguistic structure: the constituents of a propositional representation are the grammatical constituents of its associated sentence.3

3 Note that, in this context, the specified Marrian function is not merely a function in extension; that is, it does not merely define an input/output relation. Systematicity phenomena also require the identification of the (linguistic) structural information that makes an input/output relation a systematic relation. A function that, e.g., goes from the representation of Rab to the representation of Rba would not capture systematicity if Rab and Rba were primitive representations in the system. One such intensional function can be seen as a Marrian function that not only states what the system does but also why the system does it. In Peacockean terms, the function is seen as specifying, at Peacocke's level 1.5, the information on which the algorithm draws (cf. Peacocke 1986).

Now, at the algorithmic level (Marr's level 2), we can meaningfully postulate representations that share or, alternatively, that merely encode the structure found at the highest level. For instance, algorithmic representation in terms of the LISP programming language just mimics linguistic structure and accounts for systematicity phenomena via programmed permutation of linguistic constituents (in terms of car, cdr and cons operations). In contrast, algorithmic representations consisting of tensor-product networks or Gödel numbering are instances of representations that merely encode linguistic structure, via superimposable activation vectors or via products of uniquely factorable numbers (see Cummins et al. 2001 for details). The systematicity function at level 1 would in this case be implemented by mathematical operations. Thus, the claim that LISP representation shares structure with high-level representation (while tensor-product networks or Gödel numbering do not) is now both intelligible and cognitively relevant. (A toy sketch of the two kinds of scheme is given at the end of this section.)

Notably, the analysis in terms of Marrian levels helps to clarify (part of) the real disagreement between the LOT and the connectionist explanations of cognitive systematicity: both agree that certain cognitive states are systematic because of (empirically tested) facts having to do with the structure of those states identified at a high level of description. However, they disagree as to whether that structure at a high level of description is reproduced in MR at a lower level of description. The failure to see the relevance of the distinction between levels thus amounts to a fatal misunderstanding of the connectionist/classical MR debate. The debate is about what the proper inter-level explanatory relations really are, and not only about what particular algorithmic representation would explain systematicity. This has been clear at least since Fodor and Pylyshyn's (1988) original presentation of the challenge for connectionist developments, to wit: either connectionist networks cannot explain systematicity or, if they do, they are implementations of LOT models. However, if this is a challenge at all, to grant the appeal of representational schemes that do not share their structure with language, such as a suitable sort of encoding, is not to grant that the representational scheme at hand cannot be considered to consist of LOT representation. The key point is precisely that high-level phenomena, such as, paradigmatically, systematicity, may have more than one lower-level implementation. Something must be said, therefore, to justify Cummins et al.'s outright presupposition that encodings do not implement a LOT.

Cummins et al. may be right that connectionist explanations of systematicity are on an equal evidential footing with LOT explanations. However, our point here is that they have not even started to show that this is indeed the case. Our diagnosis is clear: they fail to distinguish levels of explanation and in so doing they make it impossible to correctly identify the fundamental traits of the dialectics they are concerned with. As we shall see, Cummins et al. are not alone in underestimating the need to distinguish between levels in cognitive explanation.
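
As announced above, here is a toy Python sketch, our own illustration rather than anything in Cummins et al., of the two kinds of level-2 scheme just contrasted. The first mirrors the LISP-style case: the representation is a constituent structure and the systematicity function is computed by permuting those very constituents. The second is a Gödel-numbering encoding: a sentence is carried by a single composite integer, the same input–output function is computed arithmetically, and no part of the integer is a constituent in the classical sense.

```python
# Level 1 (computational theory): the systematicity function that maps the
# representation of "John loves Mary" to its variant "Mary loves John".
# Below, two level-2 schemes compute it.

# --- Scheme 1: structure-sharing (LISP-like constituent structure) ----------
# The representation is a structure whose parts ARE the grammatical
# constituents; recombination is a permutation of those parts (cf. car/cdr/cons).

def swap_arguments_structural(rep):
    """rep is a tuple (relation, agent, patient); return the systematic variant."""
    relation, agent, patient = rep
    return (relation, patient, agent)

assert swap_arguments_structural(("loves", "John", "Mary")) == ("loves", "Mary", "John")

# --- Scheme 2: encoding (Goedel numbering) ----------------------------------
# Each lexical item gets a code; a sentence is carried by the single integer
# 2**code(relation) * 3**code(agent) * 5**code(patient). The integer has no
# constituents, yet the same input-output function is computed arithmetically.

CODES = {"loves": 1, "John": 2, "Mary": 3}
WORDS = {v: k for k, v in CODES.items()}

def encode(relation, agent, patient):
    return 2 ** CODES[relation] * 3 ** CODES[agent] * 5 ** CODES[patient]

def exponent_of(n, p):
    """Exponent of the prime p in the unique factorisation of n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def swap_arguments_encoded(n):
    """Compute the same level-1 function directly on the Goedel number."""
    rel, agt, pat = (exponent_of(n, p) for p in (2, 3, 5))
    return 2 ** rel * 3 ** pat * 5 ** agt

jlm = encode("loves", "John", "Mary")
mlj = swap_arguments_encoded(jlm)
assert mlj == encode("loves", "Mary", "John")
# Decoding is possible, but only by factorisation: the number itself has no
# 'John'-part or 'Mary'-part the way the tuple above does.
assert WORDS[exponent_of(mlj, 3)] == "Mary" and WORDS[exponent_of(mlj, 5)] == "John"
```

The two schemes are weakly (input–output) equivalent with respect to this function; the present point is that what separates them, namely whether the level-1 constituent structure reappears at level 2, only becomes a well-posed question once the two levels are distinguished.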

3 Structure in iconic representation

The second case on which we wish to focus is Fodor's recent articulation of the view that iconic representation is of a distinctive kind (Fodor 2008). In particular, and unlike discursive or linguistic representation, icons are said to be a homogeneous kind of symbol from both the syntactic and the semantic point of view (Fodor 2008, p. 174). By introducing this kind of representation, Fodor aims to show that there is empirical evidence for a kind of nonconceptual representation in perceptual experience. As in the case of Cummins et al.'s developments, it is our view that Fodor's strategy for accomplishing his aim is utterly inadequate from the outset. Furthermore, in this case too the mistaken approach leads to undesired results as regards the intended line of reasoning. Fodor may be right that there is a distinctive kind of representation whose nature is nonconceptual; the point we wish to make here is that, even so, Fodor's attempt is not successful because he simply ignores the need to distinguish between levels of explanation in his account of the structure of a mental representation.

Fodor takes pictures to be paradigmatic of iconic representation. In view of the familiar facts having to do with systematicity and productivity, Fodor claims that iconic representation, like discursive representation, is compositional (cf. Fodor 2008, pp. 171 and 173).4 As it happens, the kind of compositionality at stake in the case of iconic representation is homogeneous; that is, it offers no way of distinguishing canonical parts from mere parts. Thus, Fodor describes iconic compositionality as follows:

Picture principle: If P is a picture of X, then parts of P are pictures of parts of X. Pictures and the like differ from sentences and the like in that icons don't have canonical decompositions into parts; all the parts of an icon are ipso facto constituents. [Fodor (2008), p. 173, his emphasis]

4 This is, on reflection, very striking. The distinctive feature of pictures is that they are structurally homogeneous symbols (see below). But it is very controversial to suppose that homogeneous symbols, that is, symbols with no canonical constituents, can figure in systematicity phenomena, because what systematicity demands, if anything, is that the distinctive (semantic-plus-syntactic) contribution of parts of a symbol can be identified in a large number of other symbols (say, the semantic-plus-syntactic contribution of 'a' as found in 'Fa', 'Ga', 'Rab', 'Rba', and so on). That is impossible if, as homogeneity demands, symbols do not have distinctive (semantic-plus-syntactic) parts. This puzzle, however, does not affect our present discussion.

Even if this characterization of the structure of an icon may have some intuitive appeal, it is, we submit, certainly unsatisfactory. There are two reasons for this. Firstly, Fodor's picture principle misleadingly suggests that an icon's structure is dependent upon the structure of some worldly X. This is because, unlike Cummins et al.'s account of the relation of MR sharing S with D, the relation of MR iconically representing, or being a picture of, X is clearly a relation between (part of) a cognitive state and some extra-mental reality, albeit described quite abstractly. Thus, a first problem with the picture principle is the characterization it offers of an icon's structure; namely, it says that an iconic representation of X is structurally constituted by homogeneous iconic representations of parts of X. In other words, the parts of an icon are described by reference to the parts of some X. One very undesirable consequence of this characterization of the structure of an icon is therefore that it is apparently dependent upon an extra-mental reality. It may be true that the iconic representation of, say, a giraffe is such that the parts of the icon represent parts of the giraffe. It does not follow, and it is certainly false, that the structure of the (mental) picture of a giraffe is defined in terms of the structure (whatever that may be) of the giraffe. To put it crudely, the giraffe consists to a large extent of molecules of water, whereas the structure of the picture of the giraffe is determined, at the very minimum, by the relevant sort of computational process that leads to the picture's configuration.5 In short, the relation of iconically representing X in the sense of the picture principle is, as it stands, useless in the account of the icon's structure as such.6

5 Examples of accounts of structure in visual perception abound in cognitive science. To mention just one, Hummel (2001) offers a synthetic account in terms of static and dynamic binding. In Hummel's model, only when dynamic binding (which is understood as requiring attention) is involved does explicit representation of (parts of) objects and their spatial relations take place. When the visual information is processed quickly, no explicit representation of the structure is used by the algorithm. The present point is that which structure (if any) a given representation actually has is never accounted for in terms of outer reality, but only in terms of inner computational processes. Fodor's apparent mistake thus seems to be the intentional fallacy that Pylyshyn has vigorously denounced, that is, the fallacy of attributing properties of what is being represented to the representation itself (as if our representation of a red square were itself red and square): 'Yet so long as we assume that the form of some mental representation must account for the content of the perceptual experience we are inevitably led to postulate a picture-like representation to match a picture-like experience' (Pylyshyn 2007, p. 122).

6 Not, of course, in the account of the icon's structure as a representation of X. The problem is that the specification of the structure of a given representation is something over and above what the representation represents. The point is even clearer in the case of discursive representation. When we say that the discursive representation 'John loves Mary' is constituted by 'John', 'loves' and 'Mary', it is not on the ground that 'John loves Mary' represents John, the relation of loving and Mary. In fact, that this is exactly what 'John loves Mary' actually does represent is perfectly compatible with this discursive representation having some other constituents; indeed, it is compatible with the discursive representation being primitive.

The second and more important reason for questioning Fodor's characterization of the structure of an icon is that it is patently ambiguous as to whether it belongs to a high level or to an algorithmic level of explanation. In being ambiguous about this, it runs the risk of being absolutely irrelevant to the conceptualist/nonconceptualist debate to which it purportedly contributes. According to Fodor, the candidates for iconic representation are structurally homogeneous representations through which subjects subpersonally register certain information that is personally available to them only when their conceptual equipment is put to work. Examples include Sperling's (1960) recognition of tachistoscopically presented letters and Julesz's (1971) illusion of three-dimensionality in stereoscopically presented dots. However, the debate surrounding nonconceptual content is (primarily) a debate about content at a high and personal level of description, a level that is capable of entering into the account of a subject's intentional behaviour. Fodor's failure is thus a failure to appropriately link the empirical cases to which he appeals at a subpersonal (roughly algorithmic) level with the kind of (nonconceptual) high-level content that he needs in order for his account to bear on the debate between conceptualists and nonconceptualists.7

7 As an anonymous referee rightly points out, since Bermúdez's 1995 paper, the debate surrounding nonconceptual content has also taken into account nonconceptual content at a subpersonal level. This fact, however, does not affect the point just made about Fodor's failure to link subpersonal cases with the intentional content of perceptual experience. For one thing, Fodor's contribution to the debate aims to offer an empirical argument against the a priori approaches of philosophers in the Sellars tradition (Fodor 2008, p. 193). That tradition, be it on the conceptualist or on the nonconceptualist side, has invariably been concerned with conceptual/nonconceptual contents of personal perceptual experience. For another thing, the conceptualist/nonconceptualist debate seems far from being settled even if we accept the existence of genuine nonconceptual content at a subpersonal level. This seems to be Bermúdez's own view; he acknowledges, for instance, that 'it may well turn out that the nonconceptual content of subpersonal information systems is rather different from the nonconceptual content of perception' (Bermúdez 2007, p. 69). Finally, we are here following Toribio (forthcoming), who presents the so-called subpersonal worry, that is, the worry that Fodor's line of argument is threatened by the consideration that iconic representation can only be subpersonal algorithmic representation, and hence over and above the neo-Fregean conceptualist position focused on the nature of content from the subject's own point of view. Note also that, under the assumption that Marr's level 1 is intentional (see section 1 above), the high and personal level of description can be accounted for in terms of Marr's topmost level (thereby arguably equating, in this context, Marr's level 1 with Newell's knowledge level or Pylyshyn's semantic level).

In fact, Fodor fails to see that the characterization of an icon's structure that he offers fatally confuses the high-level and the low-level account of that structure. This becomes perfectly clear if we consider that it does not even begin to follow from the picture principle (that is, from the principle that if P is a picture of X, then parts of P are pictures of parts of X) that icons are such that all the parts of an icon are ipso facto constituents, i.e., that they are structurally homogeneous symbols. For instance, a system that delivers a picture of a red ball respects the picture principle. However, the system may consist of the combination of the information from two detectors, one for colour and another for shape, or else it may just take a picture of the object. In the latter case the representation is structurally homogeneous, but in the former it is not. (A toy sketch of these two possibilities is given at the end of this section.) Clearly, what Fodor does is misleadingly conflate a point about the algorithmic structure of a mental iconic representation with considerations regarding a claim about the intuitive high-level characterization of that representation.

Fodor's troubles can be avoided by quite a simple explanatory remark, one that introduces levels of explanation. Thus, a much more adequate account of the structure of an icon would take the picture principle in the context of the specification of a high-level cognitive function (a computational theory in Marr's terminology) from whatever proximal stimuli are involved in the identification of elements in a system's environment (some Xs as primitively identified in the perceptual mechanism) to an iconic (nonconceptual) representation of those elements in accordance with the principle. In addition, a way of performing the required function so that parts of an icon represent parts of an X is via structurally homogeneous symbols at an algorithmic level. The key point would be that low-level structurally homogeneous representation involves the existence of nonconceptual high-level content as defined by the picture principle. What Fodor needs, and what he is far from providing, is therefore a sound inference from semantic/syntactic features of low-level MR to semantic/syntactic features of high-level representation as captured by the picture principle. Finally, it would be crucial to look for good empirical evidence that such structurally homogeneous representations do, in fact, exist.8

Note that the proposed account frees us from the problems just discussed. On the one hand, the relation between icons, as mental representations, and extra-mental realities is clarified as a cognitive function leading from primitively identified elements in the environment to their iconic representation. In particular, this is not an account of the icon's structure in terms of outer realities. On the other hand, it becomes manifest why empirical research has a bearing on the determination of the nature of high-level (nonconceptual) content; that is, it becomes manifest that a low-level finding of structurally homogeneous representation would confirm and explain the high-level characterization of icons via the picture principle.
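
To make the 'red ball' point graphic, here is a small Python sketch of our own (the detector functions are hypothetical stand-ins, not anything proposed by Fodor): two algorithmic-level ways of carrying the information that there is a red ball, one structurally homogeneous (a bitmap whose every part is again a picture of part of the scene) and one with exactly two canonical constituents (the outputs of a colour detector and a shape detector). The higher-level description leaves it open which of the two a system employs.

```python
# Two algorithmic-level ways a system might carry the information "red ball"
# (a purely illustrative construction; the detectors are hypothetical).

# (1) Homogeneous, bitmap-like representation: a grid of colour samples.
# Any sub-grid is itself a picture of the corresponding part of the scene,
# so all parts are on a par -- there are no canonical constituents.
bitmap = [["red" if (x - 3) ** 2 + (y - 3) ** 2 <= 4 else "background"
           for x in range(7)] for y in range(7)]

def crop(grid, rows, cols):
    """Any crop of the bitmap is again a (smaller) picture of part of the scene."""
    return [row[cols] for row in grid[rows]]

top_left = crop(bitmap, slice(0, 3), slice(0, 3))   # still picture-like

# (2) Composite representation: outputs of two hypothetical detectors.
# It carries the same high-level information, but it has exactly two
# canonical constituents, so it is not structurally homogeneous.
def colour_detector(grid):
    return next(c for row in grid for c in row if c != "background")

def shape_detector(grid):
    # Crude stand-in: any filled region counts as a "ball" in this toy.
    filled = sum(c != "background" for row in grid for c in row)
    return "ball" if filled > 0 else "nothing"

composite = {"colour": colour_detector(bitmap), "shape": shape_detector(bitmap)}
assert composite == {"colour": "red", "shape": "ball"}
```

The moral is the one drawn in the text: a description at the higher level (a picture of a red ball whose parts depict parts of the ball) does not by itself settle which of these two representations the system employs, so claims about homogeneity belong to the algorithmic level and need empirical support there.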

4 Conclusion

There are surely many other examples in which the consideration of levels of explanation would make discussion in cognitive science much more precise and revealing. Here we have provided a detailed analysis of two particular cases in order to suggest that the distinction between levels is especially illuminating when we are faced with the task of elucidating the structure of mental representations. In the first case, the distinction between levels is needed in order for Cummins et al.'s key notion of MR sharing structure S with a domain D to be both intelligible and cognitively relevant. In addition, the consideration of levels of explanation shows that the assumption that encodings are not some form of LOT representation does not take account of the classical challenge to connectionist models. In the second case, we have shown that, without an analysis in terms of high and low levels of description, Fodor's notion of MR iconically representing X is of very little use in accounting for the structure of iconic representation. Absent such an analysis, the notion suggests an unpalatable dependence upon extra-mental realities in the elucidation of that structure and, more importantly, it messily conflates high-level features of iconic (nonconceptual) representation with empirical data regarding a particular algorithmic implementation of that representation. One lesson to be learned from all this is that theorists should be aware of which level they are working at, so that our research into cognitive phenomena is as clear and precise as it actually can be.
8 It seems clear to us that, even if Fodor is right that some iconic representation is structurally homogeneous, this may be far from being the general case. To mention one well-known example, according to Marr (1982), representations in the earliest stage of visual processing consist of primal sketches, which are certainly structurally heterogeneous: they involve the processing of geometrical information and intensity changes in light from the two-dimensional retinal image so as to detect edges, bars, ends and blobs. Therefore, as far as Marr's classic model is concerned, there are distinctive semantic/syntactic parts in iconic representation right from the start.

Acknowledgements We would like to thank Josefa Toribio, Christopher Evans and two anonymous referees for their helpful comments and suggestions on earlier drafts. This research has been partially funded by the MICINN, Spanish government, under the research project FFI2008-06164-C02-02, the CONSOLIDER INGENIO 2010 Program, grant CSD2009-0056, and the Catalan government, via the consolidated research group GRECC, SGR2009-1528.

References
Bermúdez, J.L. 1995. Nonconceptual content: From perceptual experience to subpersonal computational states. Mind & Language 10(4): 333–369.
Bermúdez, J.L. 2007. What is at stake in the debate on nonconceptual content? Philosophical Perspectives 21(1): 55–72.
Burge, T. 1986. Individualism and psychology. Philosophical Review 95: 3–45.
Cummins, R. 1996. Systematicity. Journal of Philosophy 93(12): 591–614.
Cummins, R., J. Blackmon, D. Byrd, P. Poirier, M. Roth, and G. Schwarz. 2001. Systematicity and the cognition of structured domains. Journal of Philosophy 98(4): 167–187.
Cummins, R., J. Blackmon, D. Byrd, A. Lee, and M. Roth. 2005. What systematicity isn't: Reply to Davis. Journal of Philosophical Research 30: 405–408.
Davies, M. 1991. Individualism and perceptual content. Mind 100: 461–484.
Egan, F. 1996. Intentionality and the theory of vision. In Perception, ed. K. Akins, 232–247. Oxford: Oxford University Press.
Fodor, J.A. 2008. LOT2: The language of thought revisited. Oxford: Oxford University Press.
Fodor, J.A., and Z. Pylyshyn. 1988. Connectionism and cognitive architecture: A critical analysis. Cognition 28: 3–71.
García-Carpintero, M. 1995. The philosophical import of connectionism: A critical notice of Andy Clark's Associative Engines. Mind & Language 10(4): 370–401.
Hummel, J.E. 2001. Complementary solutions to the binding problem in vision: Implications for shape perception and object recognition. Visual Cognition 8: 489–517.
Julesz, B. 1971. Foundations of cyclopean perception. Chicago: University of Chicago Press.
Kitcher, P. 1988. Marr's computational theory of vision. Philosophy of Science 55: 1–24.
Marr, D. 1982. Vision. San Francisco: Freeman.
Morton, P. 1993. Supervenience and computational explanation in vision theory. Philosophy of Science 60: 86–99.
Newell, A. 1986. The symbol level and the knowledge level. In Meaning and cognitive structure, ed. Z.W. Pylyshyn and W. Demopoulos, 31–39. Norwood: Ablex.
Peacocke, C. 1986. Explanation in computational psychology: Language, perception and level 1.5. Mind & Language 1: 101–123.
Pylyshyn, Z.W. 1984. Computation and cognition. Cambridge: MIT Press.
Pylyshyn, Z.W. 2007. Things and places. Cambridge: MIT Press.
Segal, G. 1989. Seeing what is not there. Philosophical Review 98(2): 189–214.
Shapiro, L.A. 1997. A clearer vision. Philosophy of Science 64(1): 131–153.
Smolensky, P., G. Legendre, and Y. Miyata. 1992. Principles for an integrated connectionist/symbolic theory of higher cognition. Technical Report 92-08, Institute of Cognitive Science, University of Colorado.
Sperling, G. 1960. The information available in brief visual presentations. Psychological Monographs 74(11): 1–29.
Toribio, J. (forthcoming). Compositionality, iconicity and perceptual nonconceptualism. Philosophical Psychology.
