
Human Studies (2006) 29: 33–55. DOI: 10.1007/s10746-005-9010-5

© Springer 2006

Phenomenology-Friendly Neuroscience: The Return To Merleau-Ponty As Psychologist


RALPH D. ELLIS
Clark Atlanta University, Atlanta, GA 30314 (E-mail: ralphellis@mindspring.com)

Abstract. This paper reports on the Kuhnian revolution now occurring in neuropsychology that is finally supportive of and friendly to phenomenology: the enactive approach to the mind-body relation, grounded in the notion of self-organization, which is consistent with Husserl and Merleau-Ponty on virtually every point. According to the enactive approach, human minds understand the world by virtue of the ways our bodies can act relative to it, or the ways we can imagine acting. This requires that action be distinguished from passivity, that the mental be approached from a first-person perspective, and that the cognitive capacities of the brain be grounded in the emotional and motivational processes that guide action and anticipate action affordances. It avoids the old intractable problems inherent in the computationalist approaches of twentieth-century atomism and radical empiricism, and again allows phenomenology to bridge to neuropsychology in the way Merleau-Ponty was already doing over half a century ago.

One of the most striking impressions of an initial reader of Maurice Merleau-Ponty's early works, Phenomenology of Perception (1941/1962) or The Structure of Behavior (1942/1967), is the careful attention both books give to neurophysiological and empirical psychological detail. Indeed, a glance at the reference lists of the books just mentioned reveals that both cite considerably more physiology and psychology than they do philosophy. Yet philosophical commentators have tended to neglect Merleau-Ponty's emphasis on psychology, especially neuropsychology, while neuroscientists until very recently had tended to be unaware of Merleau-Ponty's important contributions to their own field. In fact, this neglect of Merleau-Ponty in neuropsychology was due to a philosophical trend: both psychology and philosophy of mind after Merleau-Ponty's death became inclined to the view that mental processes are reducible to lower-level mechanistic processes, without reference to the self-organizational dimension stressed by Merleau-Ponty, and seemingly almost to the exclusion of phenomenology altogether. But the good news is that neuroscience is now swinging back the other way, and is again becoming phenomenology-friendly. Neuropsychologists during the past decade have rediscovered the importance of self-organization and self-energized movement in biology, and the philosophy of mind has again begun to recognize


that the differences between consciousness and the unconscious information-processing of computers stem from the fact that conscious beings understand their world by initiating action and then taking note of environmental action-affordances; that is, we understand what kinds of actions could be afforded by a given object, just as an infant understands when it finds an object that affords sucking. This emphasis on imagined actions toward objects is very different from an understanding of consciousness based on merely passively responding to perceptual inputs in stimulus-response fashion. Francisco Varela et al. (1991/1993) dubbed this new turn the "enactive" approach to consciousness and cognition, specifically acknowledging their debt to Merleau-Ponty, and Varela along with Shaun Gallagher founded a new journal, Phenomenology and the Cognitive Sciences, whose purpose is to re-integrate phenomenology with cognitive science by means of this enactive approach. Others, such as Andy Clark (1997) and Natika Newton (1996), have used the term "embodied," including the notion that our bodies grasp the world by acting on it rather than just by reacting to it, and others, for example, Ralph Ellis (1995, 2005), Stuart Kauffman (1993), and Ellis and Newton (2000), have used the term "self-organizational," which emphasizes Merleau-Ponty's idea that psychophysical forms maintain their pattern or structure while actively replacing their own material constituents (1942/1967: 47). In short, neuroscientists are again recognizing that, just as Merleau-Ponty had suggested, conscious and mental processes are possible only because there is a genuine difference between the living and the non-living, between the active and the merely reactive, and between Körper (the body as mere object for perception) and Leib (the animated body).
Thus the revolution now occurring in the neurosciences and cognitive theory has revived interest among psychologists in Merleau-Ponty's understanding of consciousness and the mind-body relation. Consciousness is again considered to be an important dimension of mind as opposed to a mere appendage or epiphenomenon. Consciousness is not just an extra layer superimposed over physiological information processing, enabling us to be aware of what is going on in a computer-like subconscious mechanism. Instead, it is now recognized that conscious, living beings process information very differently from non-conscious and non-living systems, and that consciousness drives and organizes the process rather than being a mere causal by-product or spinoff; see, for example, Antonio Damasio (1994, 1999), Ellis (1996, 2005), Jaak Panksepp (1998), Arien Mack and Irvin Rock (1998), Thomas Natsoulas (1993), and Newton (1996). This means that conscious processes again must be seen as self-organizing phenomena resembling Merleau-Ponty's "psychophysical forms." So, in opposition to earlier mainstream twentieth-century psychology, the mind-body relation is now increasingly thought to involve a priority of the process over its own substratum. This reverses the picture from the analytic and empiricist schools of the twentieth century, which regarded processes


as higher-level patterns of activity that were supposedly derivative from the properties of their micro-level constituents. It was thought that all the causal work was done at the lower level of organization (the micro-level), and that processes formed at higher levels of organization were mere epiphenomena or causal spin-offs of the lower-level phenomena; the higher-level processes were supposed to have no real causal power of their own, so that the direction of causation was always completely bottom-up. In the new enactivist approach, consistent with Merleau-Ponty, the living process is not merely driven by lower-level, mechanical events; on the contrary, it seeks out, appropriates, and replaces the micro-constituents needed to keep the process going. There is a top-down rather than only a bottom-up causation at work, and correlatively, our minds again can have the power to move our bodies. As in Merleau-Ponty's work, attempts are again underway to understand purposefulness as more than merely an organism's consciousness of its own internal billiard-ball reactions to linear causal mechanisms. For example, Kauffman (1993) and Earl MacCormac and Maxim Stamenov (1996) emphasize this point, and Shaun Gallagher and Anthony Marcel (1999), Newton (1996), and Varela et al. (1991/1993) attempt to distinguish intentions from mere tendencies, and actions from mere reactions. The hottest topics in current neuroscience are emotion, agency, the meaning of personhood, and consciousness. Influential theorists who have directly acknowledged the influence of Merleau-Ponty include Varela et al. (1991/1993), Esther Thelen and Linda Smith (1994), Eugene Gendlin (1962/1997, 1992), Natsoulas (1993), Newton (1996), Kathleen Wider (1997), and Maxine Sheets-Johnstone (1999). Moreover, the recent trend toward dynamical systems theory traces through Kauffman (1993) back to the biochemist Jacques Monod (1971), who in turn was influenced by Merleau-Ponty on the issues discussed in the present paper.
Interest in these areas has now infiltrated almost all cognitive psychology and philosophy of mind journals, and several have devoted themselves almost entirely to the integration of phenomenology with cognitive science, notably Consciousness & Emotion, Phenomenology and the Cognitive Sciences, and the Journal of Consciousness Studies. We are thus reaching a point where neurophysiology is reconcilable with phenomenology. This was Merleau-Ponty's early hope: not merely to reject physiological psychology, but to integrate it with the experience of the living body. Fred Wertz has well summarized Merleau-Ponty on this point: perception is possible only because the body, "neither a thing closed within itself nor an unextended idea, is sensitive to things as one among them in their very order" (1987: 120). The body as subject must be reconciled with the body as part of the experienceable world. Merleau-Ponty emphasized that the solution to the mind-body problem hinged on a concept of psychophysical forms (Merleau-Ponty, 1942/1967), which he defined in this way:


Forms . . . are defined as total processes whose properties are not the sum of those which the isolated parts would possess. . . . We will say that there is form whenever the properties of a system are modified by every change brought about in a single one of its parts and, on the contrary, are conserved when they all change while maintaining the same relationship among themselves. (1942/1967: 47)

For Merleau-Ponty, consciousness is a further development of life, which is self-organizing in the sense that it must appropriate the needed material substrata to maintain its patterns of living, rather than merely having their forms be caused by those substrata. This enables life and consciousness to have causal powers that are not reducible to the sum total of the causal powers of their material constituents, and it reconciles scientific causality with Merleau-Ponty's claim that "The relations between the organism and its milieu are not relations of linear causality but of circular causality" (1942/1967: 15). Philosophers of mind are now recognizing that the causal power of a process to organize its own components does not conflict with what Jaegwon Kim (1992) calls the "causal closure" of the physical realm. Causal relations obtain only under given background conditions, and self-organizing dynamical systems exhibit tendencies to rearrange those background conditions so as to preserve the continuity of the overall process, as in the biological "shunt" mechanisms discussed by Monod (1971), Kauffman (1993), Ellis (1986, 1995), and Ellis and Newton (1998). Merleau-Ponty was ahead of his time in wanting to integrate neuroscience and phenomenology within one coherent understanding of consciousness.
It was inevitable that, sooner or later, the brain sciences would also have to take phenomenology seriously, because it was impossible to explain the phenomenon of consciousness from within a theoretical perspective so constrained by the natural attitude that nothing can be seen but the objective, so that the phenomenon of consciousness itself cannot enter the epistemological picture. At the point where consciousness intersects with brain function, the data of experience have remained incoherent with the fact of experience itself, and a reconciliation is necessary. Neurophysiological studies by Carl Aurell (1989), Michael Posner and Mary Rothbart (1992), Antonio Damasio (1994), and Alexander Luria in a neglected but important aspect of his work (1980), focusing on the attention-directing role of the emotionally-influenced frontal-limbic system, show increasingly that information processing takes place by means of completely different brain activities depending on whether it occurs on a conscious or non-conscious basis. Consciousness occurs only when efferent (outflowing) nervous activity takes the lead in selecting and directing afferent (inflowing) activity; conscious beings are self-organizing, emotionally and motivationally directed beings that actively direct their attention, and can imagine things with no afferent input (with neural substrates remarkably similar to the imaging


activities in perceptual consciousness; see John Richardson's (1991) collection of studies, for example). Consistent with Jean-Paul Sartre's suggestion in The Psychology of the Imagination (1966), subjects form perceptual imagery largely as a result of formulating their own questions about reality rather than just passively reacting to stimulus-response mechanisms. This point is discussed extensively by Ellis (1990, 1995), Christina Schües (1994), and Ellis and Newton (1998). These realizations about the differences between purposeful conscious organisms and passive information-processing machines have led to a new (very recent) emphasis on motivation and emotion as playing some as yet poorly understood role in directing attention, conjuring imagery, and facilitating the unique features of conscious beings just listed. For example, Richard Cytowic (1993), Douglas Watt (1998), and Bill Faw (2000) have explored the neurophysiology of the way the emotional directing of the attentional process is a prerequisite for perceptual inputs to register as intentional contents. Thus the traditional stimulus-response model, in which most of the causal work is done by stimulus inputs and other mechanical computations, is backward, again as Merleau-Ponty suggested in The Structure of Behavior (1942/1967). The organism must first act, and then consciousness of the environment results. This basic shift in the direction of causation can be called the "enactive" view of the mind, to use the term coined by Varela et al. (1991/1993). During the generation after Merleau-Ponty's death, his self-organizational approach, with its emphasis on grounding cognition in the body's motility, was largely abandoned by neuroscientists and cognitive psychologists in favor of a mechanical and reductionistic framework. Information processing was now viewed as a passive receiving of input from the environment rather than as an understanding based on the action affordances of the environment.
Consciousness was regarded as a final step in the processing stream, a causally irrelevant spinoff or byproduct, like the effluent waste given off by a chemical manufacturing plant. Philosophers of mind and cognitive theorists became obsessed with the computer metaphor and with an insistence on reducing the mental to something scientifically (and physically) explainable. As is now well known, the resulting computational model of mind viewed consciousness as merely an epiphenomenon of unconscious computational processes in the brain. For a generation of traditionally oriented neurophilosophers and scientists, the attempt to understand those aspects of experiential systems such as human minds that are not analogous to computer functioning, or to billiard-ball mechanical systems, got swept under the rug. Importantly, the organic was reduced to the inorganic, and this was generally taken to mean that the self-organizational was reduced to a mechanical type of causation, in which nothing moves unless acted upon by an external force.


Phenomenologists were skeptical of the supposedly mechanical aspects of the non-conscious substrates emphasized by computationalists, and therefore shied away from neuroscience altogether. But the vast phenomenon of consciousness itself was too important for psychology to ignore, and thus was bound to re-emerge sooner or later. In order to do so, it had to be understood in a new way, a way not dominated by a mechanical billiard-ball conceptualization, with its clunky attempts to accommodate the problem of intentional representation within a naive empiricist epistemology (and consequently incommensurable languages describing the subjective and objective dimensions). So for the first time in over half a century, neurophysiology and cognitive psychology are again becoming phenomenology-friendly. The new approach has arisen from a specific rejection of the old neo-modernist metaphysical assumptions that increasingly had led to the occlusion of consciousness from philosophy and science: the assumption that objects (which have real causal power) are clearly distinct from subjects (which are epiphenomenal "readouts" and can only observe the underlying mechanisms); that the reality which ultimately must explain mental functioning is at bottom an atomistic reductionism; that representational conscious activities (thoughts and perceptions) are clearly distinguishable from non-representational ones (feelings, volitions, intentional actions, and emotions); and, perhaps most important, that all reality is fundamentally reactive and passive rather than active, i.e., that nothing does anything unless caused to do it by some external force acting on it, and that there is no such thing as a pattern of activity that can organize its own substrata rather than the other way around.
For the resulting ultra-modernist metaphysics, there was no important or non-arbitrary distinction between non-living things and living ones (i.e., those which appropriate, rearrange, and reproduce the needed substrata in order to maintain a higher-order pattern of activity); yet the difference between conscious beings and non-conscious ones (for example, computers) hinges crucially on this distinction. The atomistic-reductionist project eventually led to Kuhnian anomalies that have attracted a great deal of attention in recent mainstream neuroscientific and cognitive-psychological discussions. I shall focus here on three anomalies that arise for modernist attempts to deal with consciousness, necessitating a new approach to consciousness, not merely a deletion of the concepts of consciousness and subjectivity from the philosophical and scientific vocabulary:

1. Consciousness is an enacting of, rather than a passive reaction to, the physical events that serve as its substratum; neither is it the non-physical half of an ontological dualism.

2. Mechanistic causes at the empirically observable level seem to underexplain consciousness because, as David Chalmers (1995) points out, even if we explain all the physical correlates of consciousness, we still would


not, at that point, have explained why those physical mechanisms could not have occurred in the absence of consciousness. This is Chalmers's now-famous "hard problem" of consciousness.

3. Mechanistic causes also seem to overexplain consciousness, in the sense that they provide necessary and sufficient physical antecedents for any given event, so that no causal power seems to be left for consciousness; yet we know that conscious intentions do play a role in bringing about many movements of our bodies. The mechanistic causes thus explain so much that there seems to be no room left for mental causation. Modernism's best attempt to avoid this anomaly was the thesis of psychophysical identity, which could have allowed consciousness to have the same causal powers as the physical mechanisms with which it supposedly is equivalent. But this account eventually faltered for a reason long understood by phenomenologists: it is impossible to know what a state of consciousness is like merely by knowing everything that can be known empirically about its underlying physical mechanisms. Yet, as Frank Jackson (1986) points out, if there is something about consciousness that cannot be known through empirical observation, then this fact seems to raise doubts about how consciousness could be equivalent with something that is completely empirically observable: the underlying brain mechanisms.

In the three sections that follow, I will outline the way these three anomalies have played themselves out in recent neuroscientific and cognitive-theory discussions. In each case, we can see not only why the anomalies in question lead inexorably to a new approach that is perfectly compatible with the insights of phenomenology, particularly those of Merleau-Ponty; we can also go so far as to say that the new trend actually places great emphasis on the need for serious phenomenological work.

1. The Anomaly of the Non-passivity of Conscious Attention: Enactivist Analyses of Intentional Actions

In the mechanistic framework of twentieth-century cognitive neuroscience, consciousness was supposed to be caused by, or to result from, something that happened in the brain. Perceptual consciousness, for example, was supposed to result from stimulation of the occipital lobe's visual areas, which in turn resulted from stimulation of the optic nerve by incoming sensory data (i.e., patterns of light). But this "appendage theory," as Thomas Natsoulas (1993) has called it (the notion that consciousness is a byproduct of a physical cause-and-effect mechanism, in which consciousness itself is an effect but does not act as one of the causes), has led to certain anomalies. For example, when the occipital lobe is activated by incoming visual data, there is no perceptual


consciousness of the object until the parietal and frontal lobes are active, as shown by Martha Farah (1989), Luria (1980), and Posner (1990). Yet Carl Aurell (1989), Sverker Runeson (1974), Richard Srebro (1985), and McHugh and Bahill (1985) show that the activation of the parietal and frontal lobes is not caused by the activity of the occipital lobe. Instead, what happens is that, prior to occipital processing of the visual stimulus, the very act of paying attention in order to see what is there has already been activated by the midbrain and limbic system, which subserve emotional-motivational activity. Panksepp's (1998) and Damasio's (1999) neuropsychological studies strongly suggest that this activity is self-generated and self-energizing, and can be triggered by the stimulus only if the stimulus is already felt as possibly emotionally important for the organism's purposes. The needs of the organism as a whole must first motivate the paying of attention to kinds of environmental stimuli that might be important for the organism's purposes, before we have even visually processed the object. At this point, as Damasio et al. (2000) have now shown, the frontal lobe becomes active, and in turn prompts the parietal lobe to execute the activities that subserve vague images and/or concepts of the kinds of emotionally important objects that might be present in the environment. Aurell (1989) shows that when a novel, completely unexpected stimulus is presented, complete activation of the traditionally understood sensory areas of the occipital and temporal lobes is not sufficient to produce consciousness of the visual object. According to Aurell, the subject is not conscious of the object until an event-related electrical potential can be observed in the parietal lobe, at about 300 ms after presentation of the stimulus. We know that this parietal activity correlates with imagistic consciousness because Aurell's (1989) 300 ms
evoked potential occurs at the point when the subject is visually aware of the unexpectedly presented stimulus. This 300 ms parietal activity, however, is not activated in response to the occipital lobe's prior activity; instead, the previously activated emotional brain areas (which are activated as early as 20 ms) have already directed the frontal and parietal lobes to seek out emotionally important categories of objects. This "looking for" activity has already begun (including sensorimotor imagery associated with possible action affordances) prior to any occipital perceptual area having any effect on our perceptual consciousness. The overall picture of the perceptual process emerging from these observations is not consistent with a simple stimulus-response framework. Instead, we must say that if and when the emotional and anticipatory brain activity, having first developed prior to visual processing, finds itself resonating with patterns of activity in the occipital lobe (which reflect sensory stimulation), then and only then does perceptual consciousness occur. As noted above, Richardson (1991) and others have shown that the brain substrates of mental imagery are almost the same as those for an actual perception.


The main difference is that, in the case of a mere mental image, the parietal and occipital activity is intentionally activated by means of efferent activity (i.e., nervous impulses that flow outward, away from the brain's self-energizing action centers). What is most remarkable, though, is that in the case of an actual perception this same efferent activity must also take place, intentionally activated by the organism via motivated attentional activity. But in addition, the efferent activity is experienced as resonating with correlated afferent input. The important point for our purposes is that perceptual consciousness requires self-initiated efferent activity just as much as mere mental imagery does. So we cannot view perception as caused simply by afferent input from the environment. As Merleau-Ponty said, we must "[first] look in order to see" (1941/1962: 232, italics added). The looking is orchestrated by the organism's self-organizational activity, motivated internally by its own organismic purposes, not linearly caused by the incoming perceptual signal. Those same internal processes can create vivid imagery in the absence of any input whatever. Perceptual imagery is a result of the organism gearing itself up for an anticipated input, not a result of stimulation by environmental objects. This point is clearly illustrated in recent perceptual experiments by Mack and Rock (1998). In these experiments, a perceptual object is presented at the very center of the visual field, yet when the subject's attention is occupied with an unrelated task, subjects systematically fail to consciously see the presented stimulus. In most cases, they also lack even implicit knowledge of the presented stimulus (i.e., the ability to answer questions about what they have seen on a better-than-chance basis). Mack and Rock therefore conclude that attention is a prerequisite for perception, not a result of it.
So the model of the mind as a passive receiver of causal work done by stimulus inputs and other mechanical computations places the cart before the horse. The organism must first purposely act, and only then can consciousness of the environment result. It is this fundamental shift in the direction of causation which is now sometimes referred to as the "enactive" view of the mind. Rather than a stimulus causing a response, it is the response which must occur first, and then act on the incoming afferent signals to produce a stimulus. Perhaps the clearest and most thoroughgoing expression of this kind of theory has been developed by Natika Newton (1982, 1989, 1991, 1992, 1993, 1996, 2000, 2001). In Newton's action theory of understanding, a perceptual consciousness is always preceded by an act of imagination (similar to Edmund Husserl's "meaning fulfillment" discussed in the fifth of the Logical Investigations, 1913/1972) which creates a subjunctive based on a motivated act of action planning. This plan of action creates expectations as to environmental feedback; in turn, the expectations, whether fulfilled or not, constitute mental images of a subjunctive nature. If the expectations are fulfilled as expected, the result is that, rather than a mere mental image, we


experience a perceptual image of something actually present. The interesting point for our purposes is that the expectation must precede the impact of the incoming sensory data. This means that subjunctive ideas (again, similar to Husserl's "meaning intentions") are prior to perceptual input, and action planning guides the process of looking for instantiations of the subjunctive category (for example, the image) as actually instantiated in the environment. Newton is a prime example of a vanguard of current cognitive theorists and neuroscientists who believe that consciousness plays an active part, what Marcelo Dascal (1987) calls a "pragmatic" part, in bringing about many kinds of information processing; consciousness is not just an epiphenomenon or appendage to a basically non-conscious computational process. If we understand and identify objects by imagining how our bodies could act in relation to them, then action planning grounds our understanding of objects, and ultimately of language, concepts, and even logical relations. For an infant, objects afford sucking, afford throwing, etc. Similarly, when we as adults anticipate how we might act in relation to an object or situation, we execute the rudiments of a subjunctive conceptualization. For example, to anticipate that "If I throw a ball at something (under appropriate circumstances), it will knock it over" is very similar (linguistically, neurophysiologically, and phenomenologically) to believing that "If I were to throw a ball, it would knock something over." Anticipations of the future ground our understanding of subjunctives, and thus of abstract concepts. In Newton's approach, the key to this foundation of understanding is the process of action planning.
To make this case, Newton relies on extensive neuroscientific evidence, for example, the finding discussed by Masao Ito (1993), by Damasio (1994), and by Jeremy Schmahmann (1997; see also Schmahmann et al., 2001) that the brain mechanisms underlying abstract thought are almost identical to those underlying action planning in the context of body movement. In this approach, each of the modernist biases mentioned above is avoided:

(1) Because the organism must anticipate actions toward its environment in order for consciousness to occur, consciousness is not merely passively caused by incoming stimuli or unconscious computations performed on incoming stimuli. The body's organization of stimuli occurs prior to the reception of the stimuli, and if the body does not actively seek to appropriate and rearrange the physiological substrata for its own desired patterns of conscious activity, this consciousness can never occur. Since consciousness is a higher-order process that must actively seek to appropriate and rearrange lower-level processes which are needed as substratum elements for its motivated pattern of activity, such a higher-order process cannot be explained as the causal result of the discrete actions of its own physiological substrates. It would be as misleading to explain consciousness as passively caused by the discrete mechanical


interaction of particles of brain matter as it would be to explain a sound wave passing through a wooden door as being caused by the actions of the particles of wood in the door. Instead, it is the sound wave, originating elsewhere, that causes the particles to vibrate in the pattern they do, a fact we would overlook if we were to content ourselves with explaining the pattern of the wave as being caused by the discrete movements of its substratum elements. Consciousness is disanalogous to the sound wave in that it originates mainly internally; but the analogy holds in that, as in a sound wave, the wave can spread to affect the particles needed to transmit it.

(2) A second atomistic and ultimately ultra-modernist assumption that is rejected by the enactive approach is the notion that consciousness plays no significant role in information processing, i.e., the epiphenomenalist argument that consciousness is merely a causally powerless "readout" of unconscious computational brain processes. In the enactivist view, consciousness directs much of this activity, even though consciousness itself is embodied (not in computational cerebral processes, but rather in emotional and motivational activities of the whole organism). This emotionally motivated process of action planning that determines the focus of attention is not a computer-like computational process.

(3) This implies the rejection of still another set of modernist biases: the presumption that representational states (thoughts and perceptions) are clearly distinguishable from non-representational ones (feelings and emotions), and the corollary presumption that subject and object are clearly distinct. The emotional purposes of the whole embodied organism direct conscious attention, which in turn influences in a necessary way what we perceive and think. We can be consciously aware of this whole process through proprioception, and much (if not all) rational processing results from what Newton calls "proprioceptive imagery."
That is, we proprioceptively imagine what it would be like to throw a ball (when forming a subjunctive concept of such an event), or to move our bodies rhythmically (for example, to the rhythm of a certain pattern of logical inference). But proprioceptive imagery is directed toward something that is neither clearly subject nor clearly object – my embodied self.

An enactive approach to consciousness leads to very different analyses of the relation between physical causation and conscious intention from any that were possible in the ultra-mechanistic atomistic-reductionism of the recent past, which viewed reality as fundamentally reactive rather than as consisting of patterns of activity that appropriate their atomistic components. Beyond the anomaly just discussed, the failure of the modernist conception of causation as completely passive led to still another anomaly in the philosophy of mind, whose solution also requires a rejection of this atomistic conception of the nature of
causation. This is the anomaly expressed by Chalmers's "hard problem," as discussed in the following section.
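The sound-wave analogy in the preceding section can be made concrete with a minimal sketch (the function, wavelength, and speed are illustrative choices of my own, not anything in the physics literature the analogy draws on): each element of the medium merely oscillates in place, yet the pattern itself travels through the medium.

```python
import math

def wave_snapshot(t, n=16, wavelength=8.0, speed=1.0):
    """Displacements of n medium elements at time t for a traveling wave.
    Each element only oscillates where it is; what propagates is the
    pattern y(x, t) = sin(2*pi*(x - speed*t) / wavelength)."""
    k = 2 * math.pi / wavelength
    return [math.sin(k * (x - speed * t)) for x in range(n)]

# The snapshot at t = 2 is the t = 0 snapshot shifted two elements along:
# the pattern has moved, though no element has moved horizontally at all.
a = wave_snapshot(0)
b = wave_snapshot(2)
print(all(abs(b[x] - a[x - 2]) < 1e-9 for x in range(2, 16)))  # True
```

The elements' motions explain how the wave is transmitted, not why it has the pattern it has; that pattern was fixed elsewhere, which is the point of the analogy.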

2. The Anomaly of Physicalistic Underexplanation: Chalmers's "Hard Problem"

The new self-organizational, enactive and embodied approach to cognitive theory also attempts to offer a new perspective on Chalmers's "hard problem" of consciousness (1995). The dilemma as formulated by Chalmers is very similar to G.E. Moore's "open question" argument in ethical theory (1900/1956). According to Moore, if someone wants to define "morally right" to mean "productive of such and such consequences" (for example, pleasure or happiness), the question can meaningfully be posed, "Is it morally right to produce such and such consequences?" The very fact that such a question can be meaningfully asked shows that "morally right" cannot be equivalent in meaning to "productive of such and such consequences." Similarly, against any given physical explanation of consciousness, Chalmers points out that the question can always be meaningfully asked, "But isn't it conceivable that all the elements in that explanation could occur, resulting in all the same information-processing outcomes that would be produced in a conscious process, but in the absence of consciousness?" For example, computationalists have maintained for the past 30 years or so that consciousness can be explained either as an epiphenomenon of, or as identical with, a digital computer-like process that uses the hardware of the brain to process its software. But, says Chalmers, we can easily imagine such a computational process as occurring in the absence of consciousness. Therefore, some further explanation is required in order to understand why consciousness does in fact accompany such computational processes in certain cases (for example, in human organisms).

Just as in Moore's open question argument, here too the dilemma cannot be escaped simply by defining consciousness as such-and-such by arbitrary fiat (any more than we can define "morally right" by arbitrary fiat as "productive of pleasure").
For example, we cannot arbitrarily define consciousness as a linguistic processing system whose outputs resemble sentences in the English language, and which follows the principles of logic as contained in Copi's logic textbook. The problem here would be the same as with Moore's open question: We could always ask, "Yes, but is it not conceivable that a physical system could process information according to the rules of logic and the English language, without being accompanied by consciousness?"

If consciousness is not to be defined by arbitrary fiat, then how do we define what it is that we are trying to explain when we try to explain consciousness? Before asking this question, we must have a notion of what we mean by "consciousness" as it occurs in our question. And, of course, anyone
capable of formulating such a question does know what is meant by "consciousness," because, as Eugene Gendlin makes so clear, this person would have to experience his or her own consciousness in order to know what question he or she is trying to formulate (1992). Other than assuming such direct experiencing of one's own consciousness, there would seem to be no way of letting anyone know what we mean by the word "consciousness," since the stating of any definition presupposes that the hearer of the definition knows what it is to be conscious of something (in this case, the meanings of the elements in the proposed definition).

Part of what we want to address in any explanation of consciousness, then, is the phenomenal experience of consciousness, as opposed to a definition by arbitrary fiat (for example, in terms of information processing). Correlatively, part of what makes Chalmers's open question so difficult is that, if we were to arbitrarily stipulate some non-phenomenal (i.e., physicalistic) definition of consciousness, it would be easy to imagine that the proposed physical process might have occurred without consciousness as phenomenally experienced, and therefore the latter is not really adequately explained by the physical explanation being proposed. Some further explanation seems required as to why this particular physical process could not have occurred without being accompanied by the phenomenal experience of consciousness.

At the same time, if there is to be any hope of seriously addressing Chalmers's open question, the definition of consciousness must not only be framed in phenomenally experienceable terms; it must also be broken down into specific enough elements that these elements can be correlated with physiological substrata which, in the final analysis, will turn out to be unimaginable without being accompanied by the corresponding elements of consciousness. It cannot be enough merely to say that consciousness is simply indefinable except through a direct experience of it.
If the dualism that plagued modernism is to be avoided, we must identify elements which, on the one hand, are necessary for the phenomenal experience of consciousness, and on the other hand, can be bridged to the empirically observable world. The enactive approach we have been discussing meets these requirements, because on the one hand it characterizes consciousness not by arbitrary fiat but as phenomenally experienceable; while on the other hand the elements of the description lend themselves to being correlated with empirically observable physiological substrata, so that at the end of the day it should be impossible to imagine this particular combination of physiological substrata as being unaccompanied by its conscious correlates. The enactive view of consciousness has been characterized by Ellis and Newton as follows:

Conscious experience (by contrast to unconscious information processing) entails an emotionally interested anticipation of possible sensory and proprioceptive input such that the pattern of the subject's interest determines
the modality, patterns, and emotional significance of the anticipated input. Specifically, the anticipation takes the form of a sensorimotor, proprioceptive and affective image of a state of affairs looked for by the subject. . . . The content of consciousness is vivid to the extent that the activity constitutive of the interest in the future "resonates" (in terms of holistic patterns of activity) with the activity of incoming (afferent) imagistic data, and with the activation of memories of past imagistic and conceptual data. (1998: 432)

The central idea is that consciousness is an anticipation of a possible input. This point is illustrated by the experience of subjects in perceptual experiments who are instructed to imagine an object prior to its appearance on a screen, or to continue looking for the object while other objects are being flashed intermittently: consistently, the result is that the subjects perceive the object more readily when they are looking for it (Corbetta et al., 1990; Pardo et al., 1990; Logan, 1980; Hanze and Hesse, 1993; Legrenzi et al., 1993; Rhodes and Tremewan, 1993; Lavy and van den Hout, 1994). To imagine an object is to be on the lookout for it. Thus, to form a mental image of a wall as blue means to look for or to anticipate blue in the wall. If we imagine a pink wall at which we are actually looking as blue, we are putting ourselves into a state of readiness to see blue if it should occur. As Merleau-Ponty says, "I give ear, or look, in the expectation of a sensation, and suddenly the sensible takes possession of my ear or my gaze, and I surrender a part of my body, even my whole body, to this particular manner of vibrating and filling space known as blue or red" (1941/1962: 212). Later he says, "The warmth which I feel when I read the word 'warm' is not an actual warmth. It is simply my body which prepares itself for heat and which, so to speak, roughs out its outline" (1941/1962: 236).
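The priming results just cited can be caricatured in a toy Bayesian sketch (every number here is an illustrative assumption of mine, not data from the studies cited): an anticipated object starts with higher prior odds of being present, so fewer samples of noisy evidence are needed before it crosses the threshold for report.

```python
def detection_latency(prior, likelihood_ratio=2.0, threshold=0.95):
    """Count evidence samples needed before the posterior probability
    that the target is present exceeds `threshold`, starting from
    `prior`. Each sample multiplies the odds by a fixed likelihood
    ratio, i.e., each glimpse favors 'target present' 2:1."""
    odds = prior / (1.0 - prior)
    target_odds = threshold / (1.0 - threshold)
    n = 0
    while odds < target_odds:
        odds *= likelihood_ratio
        n += 1
    return n

# Looking for the object (high prior) vs. not expecting it (low prior):
print(detection_latency(0.5))   # 5 samples
print(detection_latency(0.01))  # 11 samples
```

The anticipated target is "seen" in half the time, which is the qualitative shape of the experimental findings: to imagine an object is already to have tilted the scales toward perceiving it.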
We have already seen that abstract thought involves anticipation as much as does consciousness of sensory or perceptual imagery. To anticipate that if I throw a ball at something it will knock it over is similar to believing that if I were to throw a ball, it would knock something over. Interested anticipations of the future ground our understanding of subjunctives and thus of abstract concepts at the most basic level of phenomenal experiencing.

By an "interested" anticipation, enactivists mean one that is emotionally motivated. This is consistent with Richard Cytowic's suggestion that the main difference between conscious information processing and the processing of nuts-and-bolts computers is this emotionally interested anticipation, which computers (and indeed all non-biological systems) lack (1993). We are conscious of incoming afferent data only to the extent that we actively pay attention to them, and this process of directing attention is motivated by the needs of the organism. From an empirical standpoint, afferent processing – for example, in the occipital lobe – never results in conscious awareness of the object unless accompanied by frontal-limbic and parietal activity instigated by midbrain
motivational activity. In the enactive approach, the primary organismic need that motivates consciousness of objects is the need to anticipate future data which are considered important for the organism's purposes.

The above characterization of conscious experience emphasizes that the emotionally motivated anticipation of input leads to imagery. By "image," of course, I do not mean a physical replica of some object, but rather the phenomenal sense that one is looking for (or listening for, tasting for, proprioceptively feeling for, etc.) some object or state of affairs that would take the form of an intentional object. The enactive characterization of consciousness, taken from our phenomenal experience of it, can be broken down into elements which themselves can be studied in empirical terms, but which, when they interact in a certain way, cannot be imagined as interacting in that way without also being accompanied by consciousness. These elements, essentially, are (1) an emotional motivation which grounds an interest in anticipating the future; (2) sensory, sensorimotor or proprioceptive imagery activated by this emotional motivation; and (3) a "resonating" between the activity of emotionally motivated imagery and the activity stimulated by incoming sensory data and data reactivated through memory. If consciousness is characterizable as a certain kind of interaction of these elements, then the corresponding interaction of the patterns of activity in the physiological correlates of these conscious processes will be unimaginable without being accompanied by those conscious processes themselves. An enactive characterization of consciousness thus makes possible a resolution of the "hard problem," because it bridges from the phenomenal level to the empirical-scientific level in such a way that the empirically observable elements could not imaginably relate in just that way without being accompanied by consciousness.
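As a schematic illustration only (the function name and the numeric encoding are my own constructions, not the authors'), the three elements can be wired together so that none of them alone yields a conscious episode:

```python
def vividness(interest, anticipated, incoming):
    """Toy composite of the three elements: (1) emotional interest gates
    (2) anticipatory imagery, whose (3) resonance -- modeled crudely as
    pattern overlap -- with incoming data yields degree of vividness."""
    resonance = sum(a * b for a, b in zip(anticipated, incoming))
    return interest * resonance

pattern = [1.0, 0.0, 1.0]
print(vividness(1.0, pattern, pattern))    # motivated, matching input -> 2.0
print(vividness(0.0, pattern, pattern))    # no emotional motivation -> 0.0
print(vividness(1.0, [0.0] * 3, pattern))  # no anticipatory imagery -> 0.0
```

Remove any one element and vividness vanishes; it is the interaction of all three, not any component taken alone, that marks the conscious case.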
But the way of relating at issue here is precisely one in which emotion and motivation actively drive the computational process, rather than arising as a passive reaction to it. Consciousness, which inevitably includes an emotional element as part of the process of attentive awareness (an emotional element which is constitutive of the very felt nature of conscious as opposed to unconscious processing), is a higher-order process which actively appropriates, replaces, and rearranges the physical substratum elements needed to maintain and enhance the pattern of its own process. It therefore cannot be that the pattern which is consciousness is passively a causal result of the actions of those substratum elements. But this in turn requires rejecting the same modernist assumptions that were called into question above, and for analogous reasons: Consciousness (subjectivity) is not caused by micro-level "billiard ball" mechanisms, but also is not separable from them. Neither dualism nor causal epiphenomenalism can resolve the "hard problem," and psychophysical identity requires ignoring the difference between the phenomenal content of experience and its empirically observable correlates. Only an enactive approach can coherently account for
consciousness as a process that is inseparable from its substrata, because consciousness is the pattern of the activity of its substrata, yet is not passively caused by the actions of those substrata. If the process, consciousness, is inseparable from its embodiment, yet its character is not passively caused by the nature of the bodily elements per se, then in more general terms there are processes in nature that actively appropriate their substrata rather than being passive results of them. And this requires rejecting the modernist assumption that natural processes do not act, but only react – i.e., that nothing ever happens except as a passive reaction to some external force, that all reality is fundamentally passive.

3. The Anomaly of Physical Overexplanation

Although Chalmers points out that explanations of empirical properties are not sufficient to explain why the empirical properties are accompanied by phenomenal consciousness, it is equally true that the empirical properties as understood in mainstream cognitive psychology explain too much. That is, if we accept the notion that one set of neurophysiological properties is the necessary and sufficient cause of some subsequent set of neurophysiological properties, then there can be no causal role for the corresponding conscious intentions. But we can easily observe that a conscious intention does play a causal role, because the conscious decision to raise my hand does play a part in bringing it about that the hand goes up. Given the modernist approach to mechanical explanation, in which the empirically observed level constitutes a sufficient causal chain, a process such as consciousness cannot appropriate and use its own substratum elements, so consciousness remains an irrelevant epiphenomenon that can play no causal role in physiological processes, including the computational processes of the brain. Physical explanations thus explain too much, in the sense that nothing is left to be explained by conscious intentions.
Without an enactive approach, in which consciousness is a process which takes physiological events as its substrata, there can be no solution to this problem of overexplanation. But if consciousness and physiology relate as a higher-order process relates to its own substrata, and such that the higher-order process is not merely caused by or equivalent to its own substratum elements, then physicalistic overexplanation ceases to be a problem. Suppose C1 and C2 are two conscious states, and that P1 and P2 are the physiological correlates of these conscious states:

C1 → C2
P1 → P2

In the modernist-mechanistic approach, if P1 was necessary and sufficient to bring about P2 (under the given circumstances), then nothing else could be
either necessary or sufficient to bring about P2. Thus C1 could have no causal power to bring about P2. So, if C1 was the conscious intention to raise my hand, and P2 was the movement of the hand, it was necessary to say that the intention to raise the hand really played no role in the raising of the hand.

Psychophysical identity could solve this problem only by creating a worse one. Of course, if C1 and P1 were the same thing as each other, then C1 and P1 could both be necessary and sufficient to produce the same outcome. But C1 and P1 are not precisely the same thing as each other, because if they were, then complete knowledge of P1 would yield complete knowledge of C1, whereas it doesn't. No amount of empirical knowledge and explanation of a headache can reveal to someone what it feels like to have a headache, unless that observer has also experienced something like a headache in his or her own consciousness.

Nor could causal epiphenomenalism solve the problem. Epiphenomenalism would simply say that C1 is caused by P1, and C2 is caused by P2, and neither C1 nor C2 causes anything. But, in the first place, if P1 causes C1, then P1 and C1 cannot be the same thing as each other; so the question arises as to what sort of entity C1 is if it is to be distinguished from a physical entity. Epiphenomenalism seems inevitably to lead to a metaphysical dualism. Moreover, it does not solve the problem, but only bites the bullet; it does not explain how my intention to raise my hand leads to the raising of the hand, but simply denies that it does lead to it. Even this consequence could be avoided by an epiphenomenalist theory if it were plausible to posit that there is some little bit of matter in the brain whose only purpose is to serve as the substratum for consciousness, that this little bit of matter is caused to behave in the ways that correspond to conscious experience, and that it does not in turn have any effect on any other physical processes in the brain.
But this would be inconsistent with what we have learned about the neurophysiology of consciousness. What the empirical evidence points to is that processing occurs in a conscious way only when it is very globally distributed in the brain. For example, I have already mentioned that, when impulses caused by optic stimulation set up patterns of activity in the occipital lobe, but without coordinated limbic and frontal-cortex activity, no perceptual consciousness results from the occipital activity (Posner, 1990; Luria, 1980). Similarly, as summarized by Isaac Asimov (1965: 193) and Ellis (1986: 46–52), the transition from sleep to waking consciousness requires that the activities of the hypothalamus and cortex achieve a widely distributed pattern of synchronization or coordination that was not present during sleep. Jonathan Winson (1986: 46ff), Richard Restak (1984: 315–333), Richardson (1991), and Jouvet (1967) show that, when we are conscious of dream images during sleep, both efferent and afferent activity throughout the brain are detected, whereas during non-dreaming sleep both the afferent activity and some of the efferent activity are comparatively much
less pronounced. These examples suggest that consciousness requires globally distributed processes in the brain, combining local mechanisms which under different circumstances would be active in various non-conscious processes. Consciousness, then, cannot be confined to some small bit of matter which does not affect any other brain process involved in cognition. But if the physiological substratum of consciousness does affect further physiological and cognitive functioning, then consciousness itself also affects further physiological and cognitive functioning – unless we assume that consciousness is somehow separable from its physiological substratum, which again would entail a dualism of physical and non-physical occurrences.

Neither dualism, nor psychophysical identity theory, nor epiphenomenalism works as an explanation of the relation between consciousness and its physiological correlates, because the modernist concept of atomistic-reductionism does not allow a process to affect the behavior of its own substratum elements. It requires, instead, that a process must be caused by the interaction of the discrete movements of its substratum elements, each of which has a sufficient causal explanation of its own, so that the pattern of consciousness, paradoxically, can have no causal power. But the enactive approach, with its return to Merleau-Ponty's "psychophysical forms," does avoid this problem of causal overexplanation. It avoids it by allowing that a process can have causal power. In the case of the conscious states C1 and C2, and their physical correlates, P1 and P2, the enactive approach can allow that P1 is necessary and sufficient for P2 (under the given circumstances), while at the same time maintaining that C1 can also be necessary and/or sufficient for C2 and for P2. The reason is that, if C1 and P1 relate as process to substratum, then C1 and P1 are inseparable from each other in the sense that they are necessary and sufficient for each other.
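The inference being relied on here can be checked truth-functionally. This is a minimal sketch of my own: the material biconditional stands in for the richer modal notion of mutual necessity and sufficiency, but the propositional skeleton of the argument survives the simplification.

```python
from itertools import product

def entails(premises, conclusion):
    """Brute-force semantic entailment over all truth assignments."""
    for c1, p1, p2 in product([False, True], repeat=3):
        env = {"C1": c1, "P1": p1, "P2": p2}
        if all(prem(env) for prem in premises) and not conclusion(env):
            return False  # found a counterexample
    return True

premises = [
    lambda e: e["C1"] == e["P1"],        # C1 and P1 are nec. and suff. for each other
    lambda e: (not e["P1"]) or e["P2"],  # P1 is sufficient for P2
]
conclusion = lambda e: (not e["C1"]) or e["P2"]  # then C1 is sufficient for P2

print(entails(premises, conclusion))  # True
```

Drop the first premise (the mutual inseparability of C1 and P1) and the entailment fails, which is exactly the enactivist diagnosis: it is the process-substratum inseparability that lets the conscious state inherit the causal sufficiency of its physiological correlate.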
If two events are necessary and sufficient for each other, then even if one does not cause the other, and even if one is not identical with the other, still, one of these events will be necessary and sufficient for whatever the other is necessary and sufficient for. Consider, for example, three dominoes lined up in such a way that if one domino falls, it will knock over the other two. Under these given circumstances, the falling of the second domino is necessary and sufficient for the third domino's falling, but the two are by no means identical, nor is one caused by the other. Instead, the fallings of the two dominoes are events that, under the given circumstances, are inseparable from each other. Whatever is necessary and sufficient for one will be necessary and sufficient for the other.

The relation between a process and its substratum elements works out in a similar way. Since a process is inseparable, under the given circumstances, from the behavior of its substratum elements, the process will also be necessary and sufficient for whatever its substratum elements are necessary and sufficient for. Yet this does not necessarily imply that the process is caused by its substratum, or that it is identical with it. Many things are true of a
process that are not true of its substratum elements, even taken collectively. For example, a wave on the ocean may travel many miles in a horizontal direction, while its substratum elements, the movements of particles of water, are very small vertical oscillations.

The process-substratum relation in the case of consciousness is different from the relationship between a wave and the physical medium through which the wave passes in one crucial respect. Consciousness, unlike a sound wave or a wave in the ocean, is a purpose-directed process. Merleau-Ponty, as mentioned earlier, defines a purposeful activity as one in which the organism's overall pattern of activity acts in such a way as to rearrange and readjust its various parts in order to maintain or enhance the overall pattern (1942/1967: 47ff). Purely mechanical processes do not seem to behave in this way. A thermostat, while it will adjust its overall pattern to feedback from the environment, does not seem to be a purpose-directed system because, when one of its parts ceases to function or is removed, the thermostat does not act in such a way as to replace the missing part or try to compensate for its absence; it simply quits functioning. The thermostat does not "care," in this non-conscious sense of "care," whether it achieves its ultimate objective or not. It functions or not purely as an additive juxtaposition of the functioning of its parts.

It becomes increasingly clear, as we study the brain, the ecosystem, and the concept of living organisms in biology, that at least many patterns of activity maintain their organizational structure across replacements of their own substrata. As Merleau-Ponty suggests, an organism will often rearrange the overall configuration of its parts if an imbalance is created in one part which disrupts the functioning of the whole. One of Merleau-Ponty's examples of this top-down organizational structure in organisms is the development of the "pseudo-fovea" in cases of hemianopsia.
In these cases, the eyes change the functioning of the cones and rods from their original anatomical programming. In hemianopsia, the subject is rendered blind in half of each retina, so that he or she now has the use of only two half-retinas:

Consequently one would expect that his field of vision would correspond to half of the normal field of vision, right or left according to the case, with a zone of clear peripheral vision. In reality this is not the case at all: the subject has the impression of seeing poorly, but not of being reduced to half a visual field. The organism has adapted itself to the situation created by the illness by reorganizing the functions of the eye. The eyeballs have oscillated in such a way as to present a part of the retina which is intact to the luminous excitations, whether they come from the right or the left; in other words, the preserved retinal sector has established itself in a central position in the orbit instead of remaining affected, as before the illness, by the reception of light rays coming from one half of the field. But the reorganization of muscular functioning, which is comparable to what we encountered in the fixation reflex, would be of no effect if it were not accompanied by a redistribution of functions in the retinal and calcarine
elements which certainly seem to correspond point for point to the latter. (1942/1967: 40–41)

Merleau-Ponty concludes that if we adhere to the classical conceptions, which relate the perceptual functions of each point of the retina to its anatomical structure – for example, to the proportion of cones and rods which are located there – the functional reorganization in hemianopsia is not comprehensible (41). In living organisms, the whole readjusts the functioning of some of its parts when other parts are disrupted, in order to maintain the original function of the whole. Other examples of self-directed neurophysiological reorganization following localized brain injury or trauma can be found in Restak (1984: 360ff). Eric Kandel and James Schwartz (1981) report similar findings: they find that if brain cells of an embryo are transplanted to a different region of another embryo, they are transformed into cells appropriate to that region.

The Twentieth Century philosophy of mind, at least until very recently, had made every effort to remain tenaciously "bottom-up" (in the epiphenomenalist sense discussed earlier). Cognitive functions had been explained as responses to incoming stimuli, with the stimuli combining in complex ways to mechanically cause the response. The response supposedly was a purely passive change, brought about by the stimulus. As in the characteristic Twentieth Century approach to natural science, here too the only inertia was to be an inertia of passivity; nothing would move or change unless acted upon by an outside force.

In order to overcome the problems I have just outlined, an adequate conception of the mind-body relation must reopen these questions with regard to ontology and the theory of causation. We must develop a theory in which purposeful processes are able to appropriate their needed substratum elements, rather than merely being passive epiphenomena of them or ontologically identical with them.
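The contrast between the thermostat and the self-reorganizing organism can be caricatured in a toy model (entirely my own construction, offered only to make the structural point vivid): when a unit fails, the self-organizing system reassigns that unit's workload to the survivors, so the function of the whole survives, whereas the merely additive system simply loses it.

```python
class AdditiveSystem:
    """Thermostat-like: overall output is a mere sum of its parts,
    so losing a part just subtracts from the whole."""
    def __init__(self, n_units, total=120.0):
        self.units = [total / n_units] * n_units

    def fail(self, i):
        self.units.pop(i)

    def output(self):
        return sum(self.units)


class SelfOrganizingSystem(AdditiveSystem):
    """Organism-like: a failed part's contribution is redistributed to
    the surviving parts, preserving the pattern of functioning of the
    whole (loosely analogous to the pseudo-fovea reorganization)."""
    def fail(self, i):
        lost = self.units.pop(i)
        share = lost / len(self.units)
        self.units = [u + share for u in self.units]


a = AdditiveSystem(4)
b = SelfOrganizingSystem(4)
a.fail(0)
b.fail(0)
print(a.output())  # 90.0  -- the function degrades
print(b.output())  # 120.0 -- the whole rearranges its parts to maintain itself
```

Of course, the toy model only restates the demand; it does not explain how such purposeful appropriation is possible in a scientifically intelligible universe.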
This in turn will require the development of a workable account of how it is that certain activities can be purposeful in a scientifically intelligible universe. The Twentieth Century tried simply to turn its back on this problem. Purposeful activity was explained away as a purely mechanical process that only appeared, anthropomorphically, as if it were purposeful. The standard explanation was that we view a mechanical process as if it were purposeful because we view it as if it were conscious, like ourselves, and we imagine that if we were to engage in that activity, we would be doing so with the consciousness of some purpose in mind. But to characterize a process as purposeful is not to anthropomorphize. The human organism was purposeful before it was conscious. Consciousness is not necessary to purposefulness, even in the human organism. So purposefulness cannot be explained simply as the addition of consciousness to a process which otherwise could be explained simply as one that displays certain tendencies
to accomplish certain results – as if the only difference between a purposeful and a non-purposeful process were that, in the former, there is conscious awareness of the tendencies that would be present in any purely mechanical system.

Developing such a conception will not be easy. Twentieth Century science provided us with few tools or concepts to serve this kind of exploration. But the other alternative seems to be to eschew any hope of connecting psychology to an adequate understanding of consciousness altogether; and that would be too great a sacrifice.

References
Asimov, I. (1965). The Human Brain. New York: Mentor.
Aurell, C.G. (1989). Man's Triune Conscious Mind. Perceptual and Motor Skills 68: 747–754.
Chalmers, D. (1995). Facing up to the Problem of Consciousness. Journal of Consciousness Studies 2: 5–22.
Clark, A. (1997). Being There. Cambridge: MIT Press.
Corbetta, M., Meizen, F.M., Dobmeyer, S., Schulman, G.L. and Petersen, S.E. (1990). Selective Attention Modulates Neural Processing of Shape, Color and Velocity in Humans. Science 248: 1556–1559.
Cytowic, R. (1993). The Man Who Tasted Shapes. New York: Warner.
Damasio, A. (1994). Descartes' Error. New York: Putnam.
Damasio, A. (1999). The Feeling of What Happens. New York: Harcourt Brace.
Damasio, A., Grabowski, T.J., Bechara, A., Damasio, H., Ponto, L.L. and Parvizi, J. (2000). Subcortical and Cortical Brain Activity During the Feeling of Self-generated Emotions. Nature Neuroscience 3: 1049–1056.
Dascal, M. (1987). Language and Reasoning: Sorting out Sociopragmatic and Psychopragmatic Factors. In J.C. Boudreaux, B.W. Hamill and R. Jernigan (Eds.), The Role of Language in Problem Solving 2. Elsevier: North-Holland, pp. 183–197.
Davidson, D. (1970). Mental Events. In Lawrence Foster and Joe W. Swanson (Eds.), Experience and Theory. Amherst: University of Massachusetts Press, pp. 79–102.
Ellis, R.D. (1986). An Ontology of Consciousness. Dordrecht: Kluwer/Martinus Nijhoff.
Ellis, R.D. (1990). Afferent-efferent Connections and Neutrality-Modifications in Imaginative and Perceptual Consciousness. Man and World 23: 23–33.
Ellis, R.D. (1995). Questioning Consciousness: The Interplay of Imagery, Cognition and Emotion in the Human Brain. Amsterdam: John Benjamins.
Ellis, R.D. (1996). Ray Jackendoff's Phenomenology of Language as a Refutation of the "Appendage" Theory of Consciousness. Pragmatics & Cognition 4: 125–137.
Ellis, R.D. (2005). Curious Emotions: Roots of Consciousness and Personality in Motivated Action. Amsterdam: John Benjamins.
Ellis, R.D. and Newton, N. (1998). Three Paradoxes of Phenomenal Consciousness: Bridging the Explanatory Gap. Journal of Consciousness Studies 5: 419–442.
Ellis, R.D. and Newton, N. (Eds.) (2000). The Caldron of Consciousness: Affect, Motivation, and Self-Organization. Amsterdam: John Benjamins.
Farah, M. (1989). The Neural Basis of Mental Imagery. Trends in Neuroscience 12: 395–399.
Faw, B. (2000). Consciousness, Motivation, and Emotion: Biopsychological Reflections. In R.D. Ellis and N. Newton (Eds.), The Caldron of Consciousness: Motivation, Affect, and Self-organization. Amsterdam: John Benjamins.
Gallagher, S. and Marcel, A. (1999). The Self in Contextualized Action. Journal of Consciousness Studies 6(4): 4–30.
Gendlin, E. (1962/1997). Experiencing and the Creation of Meaning. Chicago: University of Chicago Press.
Gendlin, E. (1992). The Primacy of the Body, Not the Primacy of Perception. Man and World 25: 341–53.
Hanze, M. and Hesse, F. (1993). Emotional Influences on Semantic Priming. Cognition and Emotion 7: 195–205.
Husserl, E. (1913/1972). Logical Investigations. Trans. J.N. Findlay. New York: Humanities Press.
Ito, M. (1993). Movement and Thought: Identical Control Mechanisms by the Cerebellum. Trends in the Neurosciences 16(11): 448–450.
Jackson, F. (1986). What Mary Didn't Know. Journal of Philosophy 83: 291–295.
Jouvet, M. (1967). Neurophysiology of the States of Sleep. Physiological Review 47: 117–127.
Kandel, E. and Schwartz, J. (1981). Principles of Neural Science. New York: Elsevier-North Holland.
Kauffman, S. (1993). The Origins of Order. Oxford: Oxford University Press.
Kim, J. (1992). Multiple Realization and the Metaphysics of Reduction. Philosophy and Phenomenological Research 52: 1–26.
Lavy, E. and Van Den Hout, M. (1994). Cognitive Avoidance and Attentional Bias: Causal Relationships. Cognitive Therapy and Research 18: 179–194.
Legrenzi, P., Girotto, V. and Johnson-Laird, P.N. (1993). Focussing in Reasoning and Decision Making. Cognition 49: 37–66.
Logan, G.D. (1980). Attention and Automaticity in Stroop and Priming Tasks: Theory and Data. Cognitive Psychology 12: 523–553.
Luria, A.R. (1980). Higher Cortical Functions in Man, 2nd ed. New York: Basic Books.
Mac Cormack, E. and Stamenov, M. (Eds.) (1996). Fractals of Brain, Fractals of Mind. Amsterdam: John Benjamins.
Mack, A. and Rock, I. (1998). Inattentional Blindness. Cambridge: MIT/Bradford.
McHugh, S.E. and Bahill, A.T. (1985). Learning to Track Predictable Target Waveforms Without a Time Delay. Investigative Ophthalmology and Visual Science 26: 932–937.
Merleau-Ponty, M. (1942/1967). The Structure of Behavior. Trans. A. Fischer. Boston: Beacon.
Merleau-Ponty, M. (1941/1962). Phenomenology of Perception. Trans. Colin Smith. New York: Humanities Press.
Monod, J. (1971). Chance and Necessity. New York: Random House.
Moore, G.E. (1900/1956). Principia Ethica. Cambridge: Cambridge University Press.
Natsoulas, T. (1993). What Is Wrong with the Appendage Theory of Consciousness. Philosophical Psychology 6: 137–154.
Newton, N. (1982). Experience and Imagery. Southern Journal of Philosophy 20: 475–487.
Newton, N. (1989). Visualizing Is Imagining Seeing: A Reply to White. Analysis 49: 77–81.
Newton, N. (1991). Consciousness, Qualia, and Reentrant Signaling. Behavior and Philosophy 19: 21–41.
Newton, N. (1992). Dennett on Intrinsic Intentionality. Analysis 52: 18–23.
Newton, N. (1993). The Sensorimotor Theory of Cognition. Pragmatics and Cognition 1: 267–305.
Newton, N. (1996). Foundations of Understanding. Amsterdam: John Benjamins.
Newton, N. (2000). Conscious Emotion in a Dynamic System: How I Can Know How I Feel. In R. Ellis and N. Newton (Eds.), The Caldron of Consciousness: Motivation, Affect, and Self-organization. Amsterdam: John Benjamins, pp. 91–108.

Newton, N. (2001). Emergence and the Uniqueness of Consciousness. Journal of Consciousness Studies 8: 47-59.
Panksepp, J. (1998). Affective Neuroscience. New York: Oxford University Press.
Pardo, J.V., Pardo, P.J., Janer, K.W. and Raichle, M.E. (1990). The Anterior Cingulate Cortex Mediates Processing Selection in the Stroop Attentional Conflict Paradigm. Proceedings of the National Academy of Sciences 87: 256-259.
Posner, M.I. (1990). Hierarchical Distributed Networks in the Neuropsychology of Selective Attention. In A. Caramazza (Ed.), Cognitive Neuropsychology and Neurolinguistics: Advances in Models of Cognitive Function and Impairment. New York: Plenum, pp. 187-210.
Posner, M.I. and Rothbart, M.K. (1992). Attentional Mechanisms and Conscious Experience. In A.D. Milner and M.D. Rugg (Eds.), The Neuropsychology of Consciousness. London: Academic Press.
Restak, R. (1984). The Brain. New York: Bantam.
Rhodes, G. and Tremewan, T. (1993). The Simon Then Garfunkel Effect: Semantic Priming, Sensitivity, and the Modularity of Face Recognition. Cognitive Psychology 25: 147-187.
Richardson, J. (1991). Imagery and the Brain. In C. Cornoldi and M. McDaniel (Eds.), Imagery and Cognition. New York: Springer-Verlag, pp. 1-46.
Runeson, S. (1974). Constant Velocity Not Perceived as Such. Psychological Research 37: 3-23.
Sartre, J.P. (1966). The Psychology of Imagination. New York: Washington Square Press.
Schmahmann, J. (Ed.). (1997). The Cerebellum and Cognition. New York: Academic Press.
Schmahmann, J., Anderson, C., Newton, N. and Ellis, R.D. (2001). The Function of the Cerebellum in Cognition, Affect and Consciousness: Empirical Support for the Embodied Mind. Consciousness & Emotion 2: 273-309.
Schües, C. (1994). The Anonymous Powers of the Habitus. Study Project in the Phenomenology of the Body Newsletter 7: 12-25.
Sheets-Johnstone, M. (1999). The Primacy of Movement. Amsterdam: John Benjamins.
Srebro, R. (1985). Localization of Visually Evoked Cortical Activity in Humans. Journal of Physiology 360: 233-246.
Thelen, E. and Smith, L. (1994). A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge: MIT/Bradford.
Varela, F., Thompson, E. and Rosch, E. (1991/1993). The Embodied Mind. Cambridge: MIT Press.
Watt, D. (1998). Affect and the Hard Problem: Neurodevelopmental and Corticolimbic Network Issues. Consciousness Research Abstracts: Toward a Science of Consciousness, Tucson 1998, pp. 91-92.
Wertz, F.J. (1987). Cognitive Psychology and the Understanding of Perception. Journal of Phenomenological Psychology 18: 103-142.
Wider, K. (1997). The Bodily Nature of Consciousness. Ithaca: Cornell University Press.
Winson, J. (1986). Brain and Psyche. New York: Random House.
