
Treisman's (1964) Attenuation Model

Selective attention requires that stimuli be filtered so that attention can be directed. Broadbent's model suggests that the selection of material to attend to (that is, the filtering) is made early, before semantic analysis. Treisman's model retains this early filter, which works on the physical features of the message only. The crucial difference is that Treisman's filter ATTENUATES rather than eliminates the unattended material. Attenuation is like turning down the volume: if you have four sources of sound in one room (TV, radio, people talking, baby crying), you can turn down, or attenuate, three of them in order to attend to the fourth. The result is almost the same as turning them off; the unattended material appears lost. But if an unattended channel includes your name, for example, there is a chance you will hear it, because the material is still there.
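The contrast between eliminating and attenuating can be made concrete with a minimal illustrative sketch in Python. None of this comes from Treisman's work itself: the channel names, the gain value, and the detection threshold are hypothetical numbers chosen only to show that an attenuated signal, unlike an eliminated one, is still present in the system.

```python
# Illustrative sketch: Broadbent's all-or-none filter vs. Treisman's
# attenuator. Channel names, gain, and threshold are hypothetical.
attended = "talking"
channels = {"TV": 1.0, "radio": 1.0, "talking": 1.0, "baby": 1.0}

def broadbent(intensity, channel):
    # Unattended material is eliminated outright.
    return intensity if channel == attended else 0.0

def treisman(intensity, channel, gain=0.2):
    # Unattended material is attenuated ("volume turned down"), not removed.
    return intensity if channel == attended else intensity * gain

THRESHOLD = 0.5  # intensity an item normally needs to reach awareness

for ch, level in channels.items():
    for name, model in (("Broadbent", broadbent), ("Treisman", treisman)):
        out = model(level, ch)
        status = "heard" if out >= THRESHOLD else "appears lost"
        print(f"{ch:8s} {name:9s} -> {out:.2f} ({status})")
```

Under both filters the unattended channels fall below the normal threshold, so the outcome looks the same. The difference is that under Treisman's model the signal is still there (0.2 rather than 0.0), which is why an item with a lowered threshold, such as your own name, can still break through; the threshold mechanism is sketched after the attenuation-theory passage below.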

Treisman's Attenuation

Treisman agreed with Broadbent that there was a bottleneck, but disagreed about its location. Treisman carried out experiments using the speech shadowing method. Typically, in this method participants are asked to repeat aloud speech played into one ear (called the attended ear) while another message is spoken into the other ear. In one shadowing experiment, identical messages were presented to the two ears but with a slight delay between them. If this delay was too long, participants did not notice that the same material was being played to both ears. When the unattended message was ahead of the shadowed message by up to 2 seconds, participants noticed the similarity. If it is assumed that the unattended material is held in a temporary buffer store, these results indicate that the duration of material held in the sensory buffer store is about 2 seconds. In an experiment with bilingual participants, Treisman presented the attended message in English and the unattended message in a French translation. When the French version lagged only slightly behind the English version, participants could report that both messages had the same meaning. Clearly, then, the unattended message was being processed for meaning, and Broadbent's Filter Model, in which the filter selects on the basis of physical characteristics only, could not explain these findings.

The evidence suggests that Broadbent's Filter Model is not adequate: it does not allow for meaning being taken into account. In Treisman's ATTENUATION THEORY, the unattended message is processed less thoroughly than the attended one; processing of the unattended message is attenuated, or reduced, to a greater or lesser extent depending on the demands on the limited-capacity processing system. Treisman suggested that messages are processed in a systematic way, beginning with analysis of physical characteristics, syllabic pattern, and individual words. After that, grammatical structure and meaning are processed. It will often happen that there is insufficient processing capacity to permit a full analysis of unattended stimuli, in which case later analyses are omitted. This theory neatly predicts that it will usually be the physical characteristics of unattended inputs that are remembered rather than their meaning. To be analysed, items have to reach a certain threshold of intensity. All the attended (selected) material will reach this threshold, but only some of the attenuated items will. Some items have a permanently reduced threshold, for example your own name or words and phrases like 'help' and 'fire'. Other items will have a reduced threshold at a particular moment if they have some relevance to the main attended message.
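The threshold mechanism just described can also be sketched in code. The following minimal Python version is illustrative only: the intensity and threshold numbers, the listener's name, and the rule for momentary context relevance are assumptions made for the example, while the idea of permanently lowered thresholds for words like 'help' and 'fire' comes from the text above.

```python
# Illustrative sketch of Treisman's threshold mechanism (hypothetical numbers).
DEFAULT_THRESHOLD = 0.5
ATTENDED_INTENSITY = 1.0
ATTENUATED_INTENSITY = 0.2   # unattended items arrive "turned down"

# Items with permanently lowered thresholds (own name, 'help', 'fire').
permanently_lowered = {"anne": 0.1, "help": 0.1, "fire": 0.1}

def threshold_for(word, context_relevant=False):
    if word in permanently_lowered:
        return permanently_lowered[word]
    if context_relevant:
        # Momentarily lowered threshold for words relevant to the
        # attended message (hypothetical value).
        return 0.15
    return DEFAULT_THRESHOLD

def analysed(word, attended, context_relevant=False):
    # An item is fully analysed only if its intensity reaches its threshold.
    intensity = ATTENDED_INTENSITY if attended else ATTENUATED_INTENSITY
    return intensity >= threshold_for(word, context_relevant)

print(analysed("table", attended=False))  # False: attenuated, normal threshold
print(analysed("fire", attended=False))   # True: permanently lowered threshold
print(analysed("table", attended=True))   # True: attended material always passes
```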

Evaluation of Treisman's Attenuation Model


1. Treisman's model overcomes some of the problems associated with Broadbent's Filter Model; for example, the Attenuation Model can account for the 'Cocktail Party Syndrome'.
2. Treisman's model does not explain exactly how semantic analysis works.
3. The nature of the attenuation process has never been precisely specified.
4. A problem with all dichotic listening experiments is that you can never be sure that the participants have not actually switched attention to the so-called unattended channel.

For example, considering the prototype model, we would need to make detailed, but somewhat ad hoc, assumptions about what features should be used to represent the stimuli to be categorized. Theorists try to minimize the number of ad hoc assumptions, but this step is often unavoidable.

Models almost always contain parameters, or coefficients that are initially unknown, and the third step is to estimate these parameters from some of the observed data. For example, the prototype model may include weight parameters that determine the importance of each feature for the categorization problem. The importance weight assigned to each feature is a free parameter that is estimated from the choice response data (analogous to the problem of estimating regression coefficients in a linear regression model). Theorists try to minimize the number of model parameters, but this is usually a necessary and important step of modeling.

The fourth step is to compare the predictions of competing models with respect to their ability to explain the empirical results. It is meaningless to ask whether a model can fit the data or not (Roberts & Pashler, 2000). In fact, all models are deliberately constructed to be simple representations that capture only the essentials of the cognitive system. Thus we know, a priori, that all models are wrong in some details, and a sufficient amount of data will always prove that a model is not true. The question we need to ask is which model provides a better representation of the cognitive system we are trying to represent. For example, we know from the beginning that both the prototype and the exemplar models are wrong in detail, but we want to know which of these two models provides a better explanation of how we categorize objects.

To empirically test competing models, researchers try to design experimental conditions that lead to opposite qualitative or ordinal predictions from the two models (e.g., the prototype model predicts that stimulus X is categorized in category A most often, but the exemplar model predicts that stimulus X is categorized in category B most often). These qualitative tests are designed to be parameter-free in the sense that the models are forced to make these predictions for any value of the free parameters. However, it is not always possible to construct qualitative tests for deciding between models, and we often need to resort to quantitative tests in which we compare the magnitude of the prediction errors produced by each model. Even when it is possible to construct qualitative tests, it is informative to examine quantitative accuracy as well.

The last step is often to start all over: reformulate the theoretical framework and construct new models in light of the feedback obtained from new experimental results. Model development and testing is a never-ending process. New experimental findings are discovered all the time, posing new challenges to previous models. Previous models need to be modified or extended to account for these new results, or in some cases we need to discard the old models and start all over. Thus the modeling process produces an evolution of models that improve and become more powerful over time as the science in a field progresses.
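To make the parameter-estimation and quantitative-comparison steps concrete, here is a minimal illustrative sketch in Python. It is not taken from the book: the stimuli, the observed choice proportions, the exponential similarity rule, and the use of a single sensitivity parameter c (rather than a full set of feature weights) are all simplifying assumptions made for this example.

```python
import numpy as np

# Hypothetical two-category problem: four binary features per stimulus.
prototype_A = np.array([1, 1, 1, 1])
prototype_B = np.array([0, 0, 0, 0])
exemplars_A = np.array([[1, 1, 1, 0], [1, 1, 0, 1], [0, 1, 1, 1]])
exemplars_B = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [1, 0, 0, 0]])

# Test stimuli and made-up observed proportions of category-A responses.
stimuli = np.array([[1, 1, 1, 1], [1, 1, 0, 0], [0, 1, 0, 1], [0, 0, 0, 0]])
observed_pA = np.array([0.95, 0.70, 0.45, 0.05])  # hypothetical data

def similarity(x, y, c):
    # Exponential similarity that decreases with feature mismatches.
    return np.exp(-c * np.sum(np.abs(x - y), axis=-1))

def prototype_pA(stimulus, c):
    # Choice probability from similarity to the two category prototypes.
    sA = similarity(stimulus, prototype_A, c)
    sB = similarity(stimulus, prototype_B, c)
    return sA / (sA + sB)

def exemplar_pA(stimulus, c):
    # Choice probability from summed similarity to all stored exemplars.
    sA = similarity(stimulus, exemplars_A, c).sum()
    sB = similarity(stimulus, exemplars_B, c).sum()
    return sA / (sA + sB)

def fit(model):
    # Step 3: estimate the free sensitivity parameter c by grid search,
    # minimizing the sum of squared prediction errors (quantitative fit).
    best_c, best_sse = None, np.inf
    for c in np.linspace(0.01, 5.0, 500):
        pred = np.array([model(s, c) for s in stimuli])
        sse = np.sum((pred - observed_pA) ** 2)
        if sse < best_sse:
            best_c, best_sse = c, sse
    return best_c, best_sse

# Step 4: compare the fitted models on the same data.
for name, model in (("prototype", prototype_pA), ("exemplar", exemplar_pA)):
    c, sse = fit(model)
    print(f"{name} model: best c = {c:.2f}, SSE = {sse:.4f}")
```

On these made-up data a lower SSE simply indicates a better quantitative fit; in practice one would also penalize model complexity and, where possible, design the qualitative, parameter-free tests described above.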
