
Appendix 1

Ten Steps
A User-Friendly Guidebook for the Ten Steps to Evaluate Programs that Erase the Stigma of Mental Illness

Patrick Corrigan, Illinois Institute of Technology

This work was made possible by grants MH62198-01 for the Chicago Consortium of Stigma Research, plus MH66059-01, AA014842-01, and MH08598-01, with P. Corrigan, P.I. All the materials herein solely represent the research and subsequent opinion of the P.I.

Table of Contents

1. Introduction
2. The Role of Research and Program Evaluation
3. The Anti-Stigma Program Evaluation Plan (ASPEP-10): Ten Steps
4. Ten Step Plan Summaries
5. An Example of an Anti-Stigma Evaluation Plan (ASPEP-10)
6. Cross Group Differences


Chapter 1 Introduction

This Guidebook provides materials that will help most advocates and other stakeholders carry out a solid and meaningful evaluation of anti-stigma programs. The Guidebook is written for the person who is unfamiliar with research and evaluation strategies and hence is written to be user-friendly. It is a companion to Corrigan, Roe, and Tsang (in review)1, a book that summarizes effective anti-stigma programs. In addition, many of the instruments used as examples in the Guidebook are provided in A Toolkit of Measures Meant to Evaluate Programs that Erase the Stigma of Mental Illness (hereafter referred to as the Toolkit).

In this Guidebook, we summarize the Ten Steps of an Anti-Stigma Program Evaluation Plan (ASPEP-10). Central to the research endeavor is Community-Based Participatory Research: the inclusion of stakeholders in all stages of evaluation, including leadership of the evaluation plan. A worksheet representing the plan is provided in Appendix 2A. The text of the Guidebook carefully summarizes the Evaluation Plan in a step-by-step process. In addition to the Guidebook are fidelity and satisfaction instruments that parse out and identify the active and effective ingredients of an anti-stigma program.

1 Corrigan, P.W., Roe, D., & Tsang, H. (in review). Challenging the Stigma of Mental Illness: Lessons for Advocates and Therapists. London: Wiley. Available from Amazon.com and other online bookstores.


Chapter 2 The Role of Research and Program Evaluation


In many ways, the steps of the ASPEP-10 parallel the elements of more esoteric research: define the question and hypothesis of the study, specify the measure(s) meant to test the hypothesis, complete statistical analyses, and make inferences based on these analyses. This last point is a sit-down moment for most readers; advocates from varied stakeholder groups sit down and use the data to decide whether the intervention is good, indicating resources should be sought to continue it, or whether it should be discarded for another or amended in various ways. We have put together a worksheet in Appendix 2 that is a guide to this approach; it may seem a simple one to sophisticated readers, namely those with some experience in social science. We agree that consultants with mastery of this arena should be recruited for the evaluation, assuming inclusion of the expert does not exhaust evaluation resources. We believe, however, that the funds needed to hire an expert are beyond the resources of most advocacy programs. Hence, we developed the user-friendly ASPEP-10, which can be completed by most eager stakeholders. The ASPEP-10 is laid out as a simple, step-by-step program that most advocates can understand and implement in the absence of a research methods expert.

Why complete an evaluation of anti-stigma programs? In an era of evidence-based practices, advocates will want to use the most effective stigma change approaches. The ASPEP-10 not only helps to identify these approaches, but also to unpack and revise effective program components.

Community-Based Participatory Research Team (CBPR team). CBPR represents principles and practices under which researchers partner with consumers and other stakeholders to conduct good program evaluations. The CBPR team and its members address all the steps in good research: making sense of hypotheses, selecting effective designs, analyzing the findings, making sense of the findings, and, perhaps most importantly, using these findings to enhance the program. Stakeholder is a diverse term that might potentially include many groups:

- People with mental illness, though this is not a simple group. It includes people currently using treatments, those who see themselves as ex-patients (no longer consuming services), and survivors (not just surviving the illness, but also the treatment for the disorder).

- Family members. Often family members and people with the illness have opposing agendas. Families should be included in the CBPR team in cases where they have a relevant interest. Family is also a diverse idea. Often parents of adults with mental illness are recruited, but other relatives may also have important roles, including grandparents, siblings, spouses, and children.

- Service providers often have a relevant interest. Provider is also a diverse term with varying views across disciplines including psychiatry, psychology, social work, and psychiatric nursing. Also relevant here may be administrators (who control the purse strings of the anti-stigma program) and government authorities, such as representatives of the state mental health authority, or even legislators. Inclusion of providers, in particular, highlights a concern of some stakeholders. Psychiatrists and other mental health providers are often seen as part of the problem, not of the solution. Hence, CBPR team members need to decide whether to include people from these groups. In making this decision, we are reminded of the adage: it is better to have an adversary in the tent looking out than out of the tent peering in!

- There is one last broad group of stakeholders to consider. In many places we encourage targeted anti-stigma programs, such as trying to change the prejudice and discrimination of powerful groups such as employers, landlords, health care providers, and members of faith communities. It is wise to include people from these spheres on the CBPR team. Anti-stigma projects seeking to change employer attitudes are significantly more successful when employers are included in the development and implementation of the program. Similarly, program evaluation is enhanced with employers on the CBPR team.

As mentioned previously, stakeholder is a demographically diverse idea. Factors such as ethnicity, faith-based community, gender, and sexual orientation have all been shown to be relevant to understanding the stigma of mental illness and programs meant to challenge this stigma. Which among these factors should be represented on the CBPR team? The team needs to identify at start-up which demographics are relevant and important. For example, the West Side of Chicago is mostly African American; hence, African Americans need to have a prominent role in an evaluation conducted there.

Two principles illustrate the significance of CBPR in this evaluation plan: perspective and politic. PERSPECTIVE: Dissimilar stakeholder groups vary in their comprehension of stigma and stigma change, and in the research experiences used to test these perspectives. For example, research suggests that interests and goals of people with Western European roots tend to be individualistic when compared to East Asian cultures, where individuals with mental illness are understood in terms of a collective, usually the person's family. Perspectives from these diverse groups need to be included in the evaluation plan. POLITIC: Advocates are the group most likely to consume research findings in order to actually try to erase the stigma. They are most likely to have a sense of key policy issues in local and regional mental health authorities and to use new information about stigma change to affect corresponding legislative activity (e.g., passage of budget and other mental health bills that promote a recovery-oriented system of care) and administrative efforts (e.g., actual, day-to-day directives that make the vision of a recovery-based system a reality). CBPR team members have a history of interest in and authority with politicians who are likely to respond to constituent efforts: in the case here, a mental health agenda that is undermined by stigma.

What exactly do we mean when we say stakeholders are to be real partners in evaluating stigma change programs? At least one, frequently a service consumer, is selected as co-Principal Investigator and directs all aspects of the project with another co-PI who may have a strong research background. Some people wonder whether this is political correctness, questioning whether the consumer co-PI is just a token. Some CBPR teams provide training and practical information about research methods and the decisions needed to better inform team members. Fundamental to evaluation are hypotheses recognizing the priorities and possibilities that define real-world stigma change. Consumer or family stakeholders are often more familiar with this arena than the researcher members of the team.


Chapter 3 The Anti-Stigma Program Evaluation Plan (ASPEP-10): Ten Steps


The ASPEP-10 is a step-wise, user-friendly approach to evaluating anti-stigma programs. It is comprised of TEN steps meant to guide the reader through tasks to yield meaningful information aimed at the improvement of these anti-stigma programs. The best way to read this section is to print out the e-file or photocopy the paper version of the ASPEP-10 in Appendix 2A and then follow along. ASPEPs ten steps are identified by Roman numerals along the right margin of the worksheet. The corresponding text discussion is organized by the same Roman numerals. The section also includes worksheets for Fidelity Assessment and Program Satisfaction. A step-by-step example is provided in the next chapter. I. What is the Anti-Stigma Program? Anti-stigma programs may address public stigma or self-stigma; hence, indicating the type of stigma is first on the form. The evaluation then focuses on one essential question: does the anti-stigma program of interest have a positive impact on participants? Hence, the first text box instructs the reader to write in the name of this anti-stigma program. This can be a new program developed for this evaluation or one with more of a history, taken off the shelf as it were. Programs are likely to be more successful when they have a manual that specifies the behavioral and interactive steps basic to it. Manual name, which may often mirror that of the program should be listed on the available line. Of their many benefits, manuals often lead to fidelity ratings, assessing whether individual components of the program were in fact completed. That item is marked yes when a Fidelity Checklist already exists. In its absence, a form will have to be developed. Also, related to fidelity is participants satisfaction or dissatisfaction with the components on the fidelity form. II. Who Will be the Target of the Program? Depending on program goals, the target of a public stigma change program may be as broad a group as the general population or more local intents such as encouraging employers to hire people with mental illness. Programs meant to decrease self-stigma typically target people with serious mental illness. Where will the program be held? Most effective for both the intervention and evaluation are sites convenient and comfortable for research participants. Civic club lunches, for example, are excellent venues for employers. When is the evaluation? The question is asking for the timeline of the overall evaluation, not just the anti-stigma program. Keep in mind that the stigma change program is embedded in the larger research enterprise. Several elements may affect evaluation dates, including whether the anti-stigma program has components over several days, the time between the intervention and the follow-


up, how many subjects are sought for the study, and how many trials are needed to get data for all the subjects? What exactly is meant by follow-up? Post-test is usually collected immediately after the program is over, most likely in the room in which the program was offered. Follow-up is attempting to determine whether any benefits of the anti-stigma program are still present some time later. This suggests the anti-stigma programs may be evoking real change, showing that changes found immediately after the program ended did not quickly return to baseline. III. Who is the CBPR team? An additional who comprises step III; namely, who is responsible for conducting each stage of the evaluation project. This consideration begins with a list of CBPR team members; such a list reminds us that diverse stakeholder ownership must occur from the beginning of the evaluation. Assignments of the remaining ASPEP-10 steps are listed here. Overall authority in the science world is called principal investigator, the person who acts as General to the troops, making sure all the elements of the evaluation are completed in proper order. In continuing the military metaphor, good Generals guide the team through all decisions and activities related to the Evaluation Plan; they are neither unilateral nor dictatorial. Also from the team is the antistigma program facilitator, typically someone who has met some credential for conducting the program. Data collection may fall to a person who enjoys being obsessively careful in handing out and collecting data. A similar virtue is needed for entering data into an appropriate computer program. The person charged with handing out and collecting the data should not be the program facilitator. Subtle biases occur when the person vested in the program is collecting data. Someone else is charged with collecting fidelity and satisfaction data. It might fall within the purview of the person collecting the outcome data. Someone needs to analyze the data, and we have greatly simplified the analysis component in the study. The ASPEP-10 was developed so that people who have completed high school algebra can arrive at reasonably valid conclusions about the anti-stigma program. The last task of the CBPR team is making sense of the data. The kind of to do list suggested here is the ultimate goal of the evaluation. What needs to be done to improve the antistigma program? In some way, this task would seem to nicely return the evaluation to the team as a whole. Especially important, however, are stakeholders with administrative responsibility over the anti-stigma program, the person or persons in the role of keeping the program relevant to participants. IV. Questions Questions and hypotheses are fundamental to research and evaluation activities. Perhaps the most common question is impact: does the anti-stigma program benefit people who participate in it? This question obviously varies across public or self-stigma. Are people from the general public moved by the anti-stigma approach? Are people with serious mental illness who participate in the stigma change program more likely to endorse personal empowerment? Box III also includes questions of singular interest to the specific CBPR team. One area is difference in anti-stigma programs by cluster: gender, ethnicity and spiritual heritage, sexual orientation, SES or other demographic. For example, programs meant to discredit public stigma


might examine how program effects vary by ethnicity. Do people of South American descent, for example, show less change in prejudice and discrimination compared to those from Western Europe? In terms of self-stigma, do Muslims with mental illness report more empowerment than Christians as a result of participating in the stigma change program? A more complete example of evaluations for group differences is provided in Chapter 5. Another set of questions might examine program effects across special populations: people with mental illness who are also homeless, soldiers, ex-felons, or people with substance abuse problems. Both sets of questions are especially fertile ground for informing the ongoing development of anti-stigma programs. How must a program be enhanced to meet the needs of any cluster not currently addressed well? V. Good Measures and Design Evaluation research needs good instruments, thermometers, as it were, that are sensitive to change brought about by the anti-stigma program (for more thorough discussion of these issues, see Corrigan, Roe & Tsang, (in press) for a comprehensive discussion of measurements and science related to stigma change. We have identified five domains for measuring stigma change: attitudes, behaviors, penetration, knowledge, and information processing (Corrigan, in review)2. We restrict the discussion here to attitudes and behaviors; there are several measures of attitudes and behavioral intentions. Instruments sensitive to public and self-stigma are addressed here. More complete discussion of our measures can be obtained from the Toolkit of Measures. The CBPR team may opt to use a repeated measures research design. In its most common form, measures are collected before the anti-stigma program (pre) and after the program (post). In this guide, positive differences representing subtraction of post from pre, leads to inferences about positive impact; the evaluation supports the idea that the anti-stigma program in fact, leads to beneficial change. Typically, pre and post assessments are given immediately before and after the anti-stigma program when research participants are at hand and do not need to be sought out at a separate time and place. Some research plans include follow-up, repeating post-test measures at a later date. Follow-up addresses the important question whether positive benefits shown between pre and post endure to a later point. Do benefits of the anti-stigma program disappear at some later time? Conclusions are stronger when answers support the affirmative. One week is often used as a follow-up time. Less than one week is too recent; up to three months is possible, though beyond what is unreasonable to think impact of a 60 minute program might yield. Follow-ups are often difficult because research participants do not necessarily want to connect with the evaluation assistant. Sometimes, data can be obtained via the U.S. mail or by phone. Unfortunately, many research participants do not respond to these kinds of later contacts. Alternatively, an e-mail message might do the trick. There is a web-base program called Survey Monkey (surveymonkey.com) with which you can type out the survey and e-mail it to participants. A basic account can be set up on the website for free. Survey monkey instructions can include one or two additional probes at which time the research participant is
2 Corrigan, P.W. (in review). Measuring the impact of change programs for mental illness stigma.


reminded about the follow-up test. Survey monkey includes straightforward training and corresponding FAQs for interested members of the CBPR team. Measures of public stigma. As a reminder, public stigma is the phenomenon in which the general population agrees with the prejudice of mental illness and discriminates against people as a consequence. Attitude measures include assessments of stereotype, emotional reactions to those stereotypes, and behavioral intentions. The Toolkit has several measures that assess public stigma; we believe the 9 item Attribution Questionnaire (AQ-9) has multiple characteristics that commend it here. It is reproduced in the Appendix as a sheet that can be disseminated to research participants as a pencil-and-paper measure. That does not mean that other measures in the toolkit or from the broader realm of relevant research might not do a better job in assessing change, only that the AQ-9 provides a nice example of assessing stigma change. The AQ-9 is also reproduced in Table 1. It is a reliable and valid short form of the longer 27 item Attribution Questionnaire (which is also available in the Toolkit). The nine items of the AQ-9 represent the nine concepts that comprised our model of stigma. Briefly, those who view people with mental illness as responsible or to blame for their disorder are more
___________________________________________________________

Table 1. Items that comprise the Attribution Questionnaire


Harry is a 30-year-old single man with schizophrenia. Sometimes he hears voices and becomes upset. He lives alone in an apartment and works as a clerk at a large law firm. He had been hospitalized six times because of his illness. Below are nine statements about Harry, rated on a nine-point scale where 9 is "very much." Write down how much you agree with each item.

1. I would feel pity for Harry.
2. How dangerous would you feel Harry is?
3. How scared of Harry would you feel?
4. I would think that it was Harry's own fault that he is in the present condition.
5. I think it would be best for Harry's community if he were put away in a psychiatric hospital.
6. How angry would you feel at Harry?
7. How likely is it that you would not help Harry?
8. I would try to stay away from Harry.
9. How much do you agree that Harry should be forced into treatment with his doctor even if he does not want to?

likely to be angry with them, which subsequently undermines their desire to help those with mental illness. Conversely, those who do not blame people for their disorder, who actually view people with mental illness as victimized by it, react with pity which enhances helping responses. People with mental illness may also be viewed as dangerous. This leads to fear


which results in social avoidance; I do not want to be near people with mental illness, or, I do not want to work by them. Fear also affects prominent themes about mental health care: segregation, people with mental illness need to be sent away to hospitals or custodial community programs to protect the public, and coercion, treatment decisions need to be made by authorities so people with mental illness do not harm the public. All the underlined constructs in this paragraph directly correspond with the items of the AQ-9. The constructs sort nicely into the three components of stigmatizing attitudes: stereotypes (blame and dangerousness), emotional reaction (anger, pity, and fear), and behavioral intention (help, avoidance, segregation, and coercion). Nine individual scores, component scores, or a single overall score may be used as impact factor. Measures of self-stigma. Self-stigma occurs when people with mental illness internalize the prejudice of stigma leading to diminished self-esteem. Personal empowerment is the opposite of self-stigma. Hence, the Rogers et al (1997) test called the Empowerment Scale -5 (ES-5) is a useful tool for assessing self-stigma (reproduced in Appendix 2B). It is also summarized in Table 2 where it is labeled the Making Decisions Scale consistent with Rogers et al. The ES-5 yields scores that correspond with five factors: self-esteem/self-efficacy, power/powerlessness, community activism/autonomy, optimism/control, and righteous anger.
___________________________________________________________

Table 2. Items that comprise the Making Decisions Scale-5


Below are several statements relating to one's perspective on life and on having to make decisions. Please write the number that is closest to how you feel about the statement. Indicate how you feel now. First impressions are usually best. Do not spend a lot of time on any one question. Please be honest with yourself so that your answers reflect your true feelings.

1. I can pretty much determine what will happen in my life.
2. I generally accomplish what I set out to do.
3. People have the right to make their own decisions, even if they are bad ones.
4. People have no right to get angry just because they don't like something.
5. I rarely feel powerless.
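To make the use of these instruments concrete, here is a minimal sketch (in Python, which is an assumption; a spreadsheet works just as well) of how one participant's responses to the AQ-9 and the Making Decisions Scale-5 might be recorded and summarized. The item shorthand and the simple means are illustrative only; consult the Toolkit for each instrument's actual scoring conventions.

```python
# Hypothetical response records for one research participant; the short item
# labels paraphrase the constructs behind the nine AQ-9 items and the five
# Making Decisions Scale-5 items reproduced above.
aq9_responses = {
    "pity": 6, "dangerousness": 4, "fear": 3, "blame": 2, "segregation": 3,
    "anger": 2, "withhold help": 3, "avoidance": 4, "coercion": 5,
}   # each item answered on the 1-9 scale described in Table 1

mds5_responses = {
    "determine my life": 3, "accomplish goals": 4, "right to decide": 4,
    "right to anger": 2, "rarely powerless": 3,
}   # response values are placeholders; see the Toolkit for the actual scale

# A single overall score per instrument (one of the options mentioned in the
# text above), computed here as a simple mean of the item responses.
aq9_mean = sum(aq9_responses.values()) / len(aq9_responses)
mds5_mean = sum(mds5_responses.values()) / len(mds5_responses)
print(f"AQ-9 mean: {aq9_mean:.2f}   MDS-5 mean: {mds5_mean:.2f}")
```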

Platforms and other operational decisions. The measures may be administered in several different ways. They can be self-administered as pencil-and-paper tests. The measures are handed out to research participants with the instructions to complete using a pen or pencil. This is an efficient way for participants and experimenters to collect data. However, some people with mental illness may be unable to complete this kind of test alone because of cognitive difficulties. In that case, a research assistant sits down one-to-one with the research participant


and reads the test as an interview. These kinds of interviews can be completed by telephone, a strategy that allows research contacts for those who are less able to travel to the research site. Online services like Survey Monkey mentioned earlier also accomplish this task, though this approach requires the research participant to have access to a personal online account. Tests reviewed here and included in the Toolkit yield several different factors that can be used as outcomes. Typically, only a subset of items are used for the evaluation. Using too many may dissuade the research participant from completing the scale. Moreover, making sense of the data is a bit more difficult with too many measures. For this reason, we recommended no more than three items to be included in assessment. How might these be chosen? The CBPR team should look at a collection of measures and outcomes like that provided in Table 1 or 2 and identify those that seem relevant to the issues of interest at the time of the evaluation. VI. Comparison Group The heart of research and evaluation is comparison; i.e., fundamental differences (subtracting one score from the other ) in two scores as outlined in Box VI of the ASPEP-10. It can be assessed over time (is there a difference between pretest and post test?) or across groups (Does the anti-stigma group show more stigma change than another group?). Note that the CBPR team must choose either time or cross group comparisons. The experienced reader might note a combination of time and group provides a useful way for analyzing data but exceeds the limited goals of the ASPEP-10. Comparing pretest and posttest indicates whether the stigma change program has reduced stigma. Comparing between pretest and follow-up indicates whether positive benefits due to the anti-stigma program were evident some time later. Over time decisions include number of days to follow-up and how to obtain it. Research participants are instructed to return for the follow-up, though many participants may find such a request to be onerous. More user-friendly approaches are also possible like phone interview, regular mail, or survey monkey. The CBPR team needs to plan follow-ups before beginning an evaluation because in most instances, use of the strategies requires additional data gathering at post-test. Phone numbers, street addresses, or email addresses are needed for these follow-ups. Alternatively, group comparisons require specification of a group that will be compared with the anti-stigma program. Perhaps the simplest would be a group that received no intervention, called the no intervention control group. Alternatively, another intervention might be selected as the foil to the indexed anti-stigma program. For example, research may be seeking to understand impact of a contact program by comparing it to an education strategy. How do research participants end up in one or the other group? Experts would say random assignment is essential. One way to do random assignment is to take two possible research participants (Mr. A and Ms. B), flip a coin and assign A to the anti-stigma group if heads, or to the control group if tails. B moves the opposite, to the control group for heads and the antistigma group with tails. Random assignment is difficult to do because of many constraints. Employers at a Rotary club meeting, for example, may not be willing to use time to be mixed up for the research. In such a case, there are two considerations to group assignment that the CBPR team should mull over. 
First, do not assign to group by demographic; e.g., all men go to the anti-stigma group, women to the control. All Europeans go to the control group, Africans to


the anti-stigma group. Second, make sure both approaches are used at a research spot or meeting. Do not, for example, use the anti-stigma intervention for Rotarians on Monday and the control approach for those at Tuesdays Chamber of Commerce meeting. This point may be clearer in the example later in the chapter. Finally, how many research participants are needed in a group? Twenty four research participants should be enough for comparisons of pretest, posttest, or follow-up. 24 participants are needed PER anti-stigma intervention and control group. VII. Table Starting with the Table and Graph are the ASPEP-10 steps that yield most trepidation for readers. The analyses are then laid out in sections VIIa through VIIc in a straightforward manner using a specific example. First is a grid to enter all the data. The Table is organized into three columns labeled M1, M2, and M2 (Measurement variables 1 through 3). Enter each research participants responses to each of the three variables in the Attitudes Sheet (reproduced in Appendix 2B) in their respective spaces. The next two rows in the grid represent groups or time. Three columns are provided for group (Group 1, Group 2, and if used in the study, Group 3). Group 1 is always the anti-stigma group of interest. When focusing on group comparisons, at least one other group needs to be listed. Write that group name on the appropriate line. Alternatively, the evaluation might be over time; pre, post, and also perhaps follow-up. Research participant data do not need to be entered in any specific order. The last row is the average of all scores in the corresponding column. Graphs are used to gain an overall sense of the data and determine if there is a true difference. Space to build each graph is provided in VIIa through VIIc of the ASPEP-10. Note that graphs correspond solely to one of three measures. These are provided so individual graphs can be completed for up to three measures; measure labels are those listed in the table and entered in the corresponding space. The figure on the left is used for time comparisons (i.e., pre, post, and follow-up), the one on the right for group comparisons. The horizontal axis and vertical axis need to be completed before entering the bars. The horizontal axis (the x-axis) lays out comparisons into three possible conditions. One bar is needed for each condition (e.g., three if pre, post, and follow-up were assessed in the evaluation; two if the indexed anti-stigma group is compared to a no intervention control). The vertical or y-axis is calibrated next. Enter a value slightly larger than the highest value in the Table for each measure in the Hi space of the graph. Then divide that number by five. The results are the units for the y-axis. For example, data in a table indicates the highest score for Measure 1 is ten. Ten divided by five is two. Hence, each point on the scale is a multiple of two: 0, 2, 4, 6, 8, 10. Lastly averages of each variable for each condition are entered into the graph as a vertical bar. An example of the bar is very lightly colored in the graph for the pretest condition for Measure 1. This is only meant as an example; the bar graphs you generate reflect averages from the corresponding columns of the tables based on your data. How does one know that a difference in heights between two conditions is significant and not just some error of the sample? This is the basic question of statistics and is answered in the table on difference, significance, and meaningfulness. 
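For CBPR teams that like to double-check the graph set-up with a short script, the y-axis calibration rule just described can be written out as follows. This is a minimal sketch; Python and the sample scores are assumptions, and a hand calculator does the job equally well.

```python
# Calibrate the y-axis: choose "Hi" as a value at or slightly above the
# highest score in the Table, divide it by five, and use the result as the
# unit between tick marks.
scores = [9, 8, 7, 7, 9, 8, 7, 6, 7, 10]   # hypothetical Measure 1 values

hi = max(scores)            # 10 in this made-up example
unit = hi / 5               # 10 / 5 = 2
ticks = [unit * i for i in range(6)]
print(ticks)                # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
```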
The expert may note the steps in these tables challenge some of the assumptions of statistics. We decided on the rules outlined here as


a way to make a reasonably sound evaluation that is accessible to CBPR team members without statistical expertise. The column representing group or time is checked depending on the research design. Differences are then determined for important combinations; for time, that would be pre-test minus post-test and pre-test minus follow-up. It is very important that the direction of the subtraction follows the order in the table (e.g., it is pre-test minus post-test and not the other way around). Group differences may also be tested and the group differences column is used in this situation. Once again, the correct order of subtraction is essential: Group 1 minus Group 2, or Group 1 minus Group 3. These difference scores are the numerator (the top) of the ratio in the third column. The denominator (bottom) of the ratio is two. Cases in which the ratio is larger than one are significant and starred (*) in the far right column of the difference table. A positive number is good and supports the assumption that participants in the anti-stigma program showed less stigma after participating in that program. Cases when the ratio are lower than negative one (-1) are also significant and should be marked with a pound sign (#). Cases that yield a negative one actually show the anti-stigma program makes stigma WORSE. We especially encourage CBPR team members to heed and carefully consider findings showing negative effects. VIII. Making Sense of the Data. Here is where the CBPR team makes sense of the data gathered from the ASPEP-10 evaluation effort. A tick is entered in each of the appropriate places in Section VIII -- positives (*) or negatives (#). Note, three sets of rows are provided in Section VIII corresponding with up to three measures in the study. Lines are then provided for time and group under each measure. Checkmarks are only entered into rows that correspond with the type of comparison. A zero (0) is entered in the space marked none when neither positive (*) nor negative (#) significant differences were found. How many checks are expected in the Making Sense of Data box? Studies that only tested a pre-post design or only two groups (e.g., anti-stigma condition versus non-intervention control group), will yield only one tick per measurement (three total) in the box. Studies that comprise a more complex design will result in more checks. Consider a study that examined pre versus post, pre versus follow-up and post versus follow-up. This means three ticks per measurement (or nine total possible). Similar permutations are evident in group designs. All ticks in each column are then summed and entered into the Total boxes. What does this mean? If sum of positive ticks is greater than the sums of negative or the sums of none, then the evaluation project suggests the anti-stigma program works, specifically that it has positive effects on program participants. The CBPR team needs to consider what is effective in the program and continue to highlight it as keys to productive stigma change. Situations where none is greater than positives suggest the anti-stigma program may not be as effective as desired. This suggests the CBPR team needs to carefully consider how to strengthen the program. Situations where negatives are higher than the other two sums should raise alarms. Not only is the anti-stigma program not effective, it actually seems to be doing harm. This


situation calls for a must; namely the CBPR team must make changes. The program cannot remain in its current state. How does the team decide what to change in the existing program? Two processes are highlighted in the ASPEP-10: modify the program (and its manual) or teach and supervise staff to correctly administer it. What aspects to specifically modify or teach is indicated by the Fidelity and Satisfaction assessments. Assessing fidelity. Fidelity is whether group facilitators conducting the anti-stigma program do so in a manner consistent with the guidelines laid out in the manual; essentially that facilitators are being faithful to the steps of the program. This is done using the Fidelity Checklist in Appendix 2C. To complete the checklist, a research assistant (RA) sits unobtrusively in the back of the room and checks off behaviors as the facilitators exhibit them. In essence, the RA is answering a series of yes/no questions. Yes or no: did the facilitator show a specific component of the program? For example, did the facilitator say the purpose of the anti-stigma program during the introduction of the stigma program? There are separate checklists for programs representing contact-based approaches versus education-based strategies. The Fidelity Checklist includes generic components of an antistigma program and components that are specific to the indexed program. Components are grouped in terms of purpose. Introduction in the generic list for education programs is meant to orient participants to the facilitator and program. Teaching facts in the education program is the core of this kind of program. It seeks to increase knowledge about mental illness, specifically in four areas: illness and symptoms, hope, effective biological treatments, and effective psychosocial treatments. Rubrics in the shaded rows are only meant as organizing concepts for the Fidelity Checklist and are not considered in the summary. It is the indented components on which the RA should focus. One of the organizing concepts in the generic column for education is label avoidance. RAs only examine components listed under it. Yes or no: did the facilitator: Explain the low use of services even when people might benefit from them?; Explain how people attribute low service use to avoiding stigma?; and Identify specific stigma that leads to label avoidance? The CBPR team should only keep components in the Fidelity Checklist that correspond with their actual anti-stigma program. The generic column for the education and contact fidelity checklists has more that 30 possible components meant to be a comprehensive list of behaviors from which the research assistant might check. However, many of these may not be relevant for the program developed by CBPR advocates. In this case, specific components unrelated to the program are deleted from the list by using a black marker to strike those items. The RA scratches out components prior to the program session with feedback from the program facilitator. In addition, a program may have components specific to it, such as facilitator behaviors that make the program unique and different. For example, a spiritually focused anti-stigma


program might incorporate ideas and ceremonies from a specific rite. Facilitators of a contact program for employers might focus their stories on work life. Ample spaces are provided for idiosyncratic components but the CBPR team should not feel compelled to fill up all the spaces. The RA, then, checks all the generic components as well as the ones specific to the anti-stigma program of interest during the program. Ratios are then determined for components under the various concepts of the programs. The ratio is the number of observed component behaviors divided by the total for that section. For example, there are five components under teaching myths on the Education Fidelity Checklist. The Ratio is the number of these myths discussed during the program divided by total possible (five). Total corresponding with each of the generic concepts are already printed in the denominator of ratios for the generic list. Ratios are reported as percents so the division in the table is multiplied by 100. For example, three components out of a total of five are reported as a percent. 3 / 5 x 100 = 60% The denominator decreases in instances when the CBPR team has blacked out individual components. Hence, if the dangerous myth is removed from the fidelity sheet, then the denominator reduces to four; in the example, that means 3 / 4 x 100 = 75%. We used fairly conservative ratios for identifying high and low fidelity items. Components with ratios higher than 80% suggest well-used program components and are circled in the Table. Those under 33%, imply targets of ongoing program development and facilitator training and are highlighted. These components are not being implemented to the degree expected in the program. Ratios work with the same strategy for specific components. The CBPR team defines the number of components under each concept. For example, they may decide to add two myths to the list under education: moral repugnance and physical disgust. A total of two components now occur under teaches myths, defining the denominator of that ratio. The resulting ratio of a facilitator observed to demonstrate only one of these two components is 1/2 x 100 = 50%. Once again, concept areas should be circled where ratios are below 33%. Fidelity checklists are easier if the anti-stigma program has a manual prescribing the program components. Development of this kind of manual is often beneficial in its own right. It requires program facilitators to take stock of what they will do to help research participants diminish stigmatizing attitudes and discrimination. Manuals require facilitators to identify discrete behaviors that comprise a well-working stigma change effort. Even in situations where program facilitators are uninterested in manual development, advancing a fidelity instrument helps the CBPR team develop a broader picture as to what the program is supposed to do. Assessing satisfaction. One of the difficulties of fidelity assessment is the requirement of a research assistant to monitor program components as they are used in the session. As a result, the CBPR team requires an individual to collect these data, sometimes an unavailable resource. An alternative approach to evaluating the program per se is to assess participant satisfaction with program components, believing that items rated relatively high in satisfaction are more effective, and those rated low have less impact. There are many items on which satisfaction can be determined. 
No more than ten should be included because research participants are unlikely to complete a long satisfaction form. A blank Satisfaction with Program form is included in Appendix 2D. Individual items for the form should be selected by the CBPR team from the


fidelity checklist. The form should comprise items which the team believes to be most important. Research participants are instructed to rate satisfaction with items on a 7-point scale, where seven equals very satisfactory. Responses are collected and the research assistant then tallies responses. The Satisfaction with Program Tally Sheet copies verbatim the ten items in the Satisfaction with Program form. The Tally Sheet has cells to check components for which a research participant rated an individual item greater than 5 (meaning satisfactory) or less than 3 (unsatisfactory). Ratios are then determined by the number of checks in each box divided by the number of research participants in the evaluation. Ratios are circled if they are greater than 66%, signaling research participants were satisfied with the individual component. Ratios are highlighted when the ratio of dissatisfactions to total is higher than 66%, suggesting an unsatisfactory component. Highlights and circles are used to fill out the bottom half of Table IX. IX. Making Sense of Fidelity and Satisfaction Data Information from the Fidelity Checklist and the Satisfaction With Program Sheet are entered in Section IX of the ASPEP-10. Program components with the three largest ratios are entered first in the Fidelity Checklist Summary. Those three with the lowest scores are then entered into the box. Similarly, the three most extreme satisfaction and dissatisfaction ratios are entered into IX. Fidelity and satisfaction add the meat to the evaluation process. In cases where some adjustment of the anti-stigma program may be warranted, these indices suggest specific components that are currently strong in the program and need to be nurtured, versus those which are weak and may need to be the focus of subsequent program development or facilitator education. X. To Do List The To Do list begins with a bold consideration. Are there so many negatives findings from the data that the program should be discarded? This is meant to be provocative among other things, but is rarely implemented. It is the last two pieces of business in Section X that are important here. What tasks are needed to enhance the anti-stigma program? We have framed these options in two ways. What program components need to be modified to enhance the programs overall impact? What components should facilitators be taught to enhance their impact? Enter the program components that were viewed as absent or least satisfactory in the Fidelity Checklist or Satisfaction with Program form. CBPR members then decide how to move ahead with these findings.
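Before leaving the ten steps, the ratio arithmetic used in the Fidelity Checklist (echoed in the worked figures above: 3 / 5 x 100 = 60%) can be sketched in a few lines of code. Python and the helper function name are assumptions; the same figures can be produced with a calculator.

```python
def fidelity_ratio(observed: int, possible: int, struck: int = 0) -> float:
    """Percent of a concept area's components the facilitator was seen to use.
    'struck' counts components the CBPR team blacked out before the session,
    which shrinks the denominator."""
    return observed / (possible - struck) * 100

print(fidelity_ratio(3, 5))             # 3 of 5 myths discussed -> 60.0
print(fidelity_ratio(3, 5, struck=1))   # "dangerous" myth removed -> 75.0

# Per the text, concept areas above 80% are circled as well-used components,
# and those below 33% are highlighted as targets for program development or
# facilitator training.
```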


Chapter 4 Ten Step Plan Summaries


The ASPEP-10 and supporting documents (i.e., Fidelity Checklists and Satisfaction with Program forms) are fairly complex and thus may dissuade readers from using them. For that reason, we provide a succinct summary of the ASPEP-10 user's guide in Appendix 2F, which can be easily copied and used alongside the three sections of the Guidebook:

- the ASPEP-10
- the Fidelity Checklist
- the Satisfaction with Program form

These steps are laid out in the same numbering system as the ASPEP-10 instructions, paralleling the way they are described herein.


Chapter 5 An Example of an Anti-Stigma Evaluation Plan (ASPEP-10)


We provide an example of the evaluation plan to illustrate the various parts of the ASPEP-10 and related forms. The example is proffered here in the same Roman Numeral outline that can be found on the ASPEP-10. I. What is the anti-stigma program? This example investigated a public stigma program called First Person Stories, Inc., a contact program. The program had both a manual and fidelity checklist.

Worksheet Section I (as completed for the example):

What type of stigma? (check one)   PUBLIC _X_   SELF ___
What is the anti-stigma program?   First Person Stories, Inc. -- contact
Is there a manual for the program?   Yes _X_   No ___   Name of manual: __________
Does it have a fidelity measure?   Yes _X_   No ___   (If no, one will need to be developed.)

II. School teachers were the target of the program to be conducted in the faculty dining room starting August 12, 2009 and continuing through September 5, 2009.

Worksheet Section II (as completed for the example):

Who is the target of the program?
- For public stigma, possible targets are the general public, high school students, employers: _school teachers_
- For self-stigma, targets are usually people with mental illness: __________
How many targets will participate in the study (at least 25 per group)? __________
Where will the program be provided?   _faculty dining area at the school_


III. Six people comprised the CBPR team with the five tasks of the team split among them.

Worksheet Section III (as completed for the example):

Who is on the CBPR team?   Beverly Mills, George Williamson, Fran Olsen, Bob Mangley, Pat Corrigan, Jane Miller
Who is responsible for the overall evaluation by defining the questions and hypotheses?   _Jane_
Who is going to conduct the anti-stigma program(s)?   _George_
Who is going to collect the outcome data and enter it into a computer file?   __________

IV. The question guiding the evaluation program was whether the contact program affected participant attitudes.

Worksheet Section IV (as completed for the example):

Question(s) examining change due to the anti-stigma program: How does First Person Stories, Inc. affect stigmatizing attitudes of participants immediately after the program and two weeks later?

V. Three measurement items were selected for the study. Dangerousness was included because it is a primary stereotype of mental illness. Dangerousness leads to fear. People who are afraid of those with mental illness avoid them.

Worksheet Section V (as completed for the example):

Good Measures and Design -- name of instrument(s) to examine impact of the anti-stigma program:
M1: dangerousness
M2: fear
M3: avoidance


Worksheet Section VI (as completed for the example):

Comparison group
OVER TIME: yes? _X_     _X_ pre   _X_ post   _X_ follow-up
Number of days from post-test to follow-up: _10 days_
ACROSS GROUPS: yes? ___   Is this a wait-list control group? ___   Name of other comparison group(s): __________
Note that for across-group data, measures are collected once, at post-test.

Section VII Table (data entered over time; the group columns of the worksheet are not used):

Part.   M1: dangerousness      M2: fear               M3: avoidance
        Pre   Post  F-up       Pre   Post  F-up       Pre   Post  F-up
1        9     4     5          7     5     7          5     8     5
2        8     5     3          9     3     4          6     8     6
3        7     4     3          8     2     5          7     9     5
4        7     2     4          7     6     7          7     8     6
5        9     4     3          9     3     9          5     9     5
6        8     6     1          9     4     8          4     9     7
7        7     3     5          7     2     7          5     9     8
8        6     4     4          6     5     9          7     8     7
9        7     2     5          8     1     9          7     9     5
10       9     5     4          7     2     7          7     9     5
11       8     1     3          9     3     6          8     8     6
12       7     2     4          7     6     8          7     7     5
13       9     3     6          5     3     7          8     9     6
14       9     6     3          8     2     9          6     9     5
15       7     3     4          7     4     7          7     9     7
16       6     2     2          8     3     5          4     8     8
17       8     4     4          8     4     8          5     8     7
18       7     3     3          7     4     7          5     7     5
19       9     4     3          9     3     8          6     8     6
20       7     4     2          8     2     8          5     9     5
21       5     3     4          7     2     7          6     9     6
22       8     2     6          9     3     9          5     8     5
23       7     2     1          4     4     8          7     7     7
24       8     4     1          6     2     7          8     8     8
25       8     3     2          8     3     7          7     8     7
Avg.   7.60  3.40  3.40       7.48  3.24  7.32       6.16  8.32  6.08
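The averages in the last row of the Table can be checked with a short script. The sketch below (Python assumed) redoes the arithmetic for Measure 1; the same loop works for the other two measures.

```python
# Recompute the column averages for Measure 1 (dangerousness) from the Table.
measure1 = [  # (pre, post, follow-up) for each of the 25 participants
    (9, 4, 5), (8, 5, 3), (7, 4, 3), (7, 2, 4), (9, 4, 3),
    (8, 6, 1), (7, 3, 5), (6, 4, 4), (7, 2, 5), (9, 5, 4),
    (8, 1, 3), (7, 2, 4), (9, 3, 6), (9, 6, 3), (7, 3, 4),
    (6, 2, 2), (8, 4, 4), (7, 3, 3), (9, 4, 3), (7, 4, 2),
    (5, 3, 4), (8, 2, 6), (7, 2, 1), (8, 4, 1), (8, 3, 2),
]

for label, column in zip(["Pre", "Post", "F-up"], zip(*measure1)):
    average = sum(column) / len(column)
    print(f"{label}: {average:.2f}")   # prints 7.60, 3.40, 3.40
```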

V. This was a time series design with pre, post, and follow-up.


VI. The study was conducted over time at three measurement points: pre, post, and follow-up. Follow-up data were to be collected 10 days after post-test. Section VII discusses completion of the table below.

VII. Data were entered into columns for pre-test, post-test, and follow-up. These three columns then fell under the three measures chosen for this study: dangerousness, fear, and avoidance. Data from 25 research participants are provided in the Table. Averages per column are in the last row of the Table.

VIIa. A bar graph is then completed for pre, post, and follow-up data. This is for data over time and hence TIME is circled. Note that the graph for group comparisons is crossed out. Graphs correspond with dangerousness, fear, and avoidance respectively. The graph in VIIa represents averages for dangerousness. Before entering individual bars of the graph, the y (vertical) axis needs to be calibrated. We chose 10.0 as the cap because 8.32 was the highest score. Zero was chosen as the bottom because it was the lowest conceivable response. The average pre-test score for dangerousness was 7.60; hence a bar was entered to this high point. Post-test scores were 3.40, as were follow-up scores. Bars of the same height were entered in the graph for post-test and follow-up.

What does examination of the graph suggest? Post-test seems a lot lower than pre-test, suggesting dangerousness stigma decreased during the course of participation in First Person Stories, Inc. No change was evident from post-test to follow-up, suggesting beneficial effects remain over time. Research participants showed this improvement in the form of fewer dangerousness beliefs.
[Worksheet VIIa: bar graph for Measure 1 (dangerousness). COMPARISON IS TIME is circled and the group-comparison graph is crossed out (Grp 1 is always the anti-stigma program). The y-axis runs from 0 to Hi = 10 in units of 2; bars are drawn at the pre, post, and follow-up averages.]
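Teams that prefer to produce the bar graph with software rather than by hand could use something like the following sketch. It assumes Python with the matplotlib plotting library is available; the averages are taken from the example Table.

```python
import matplotlib.pyplot as plt

conditions = ["Pre", "Post", "F-Up"]
averages = [7.60, 3.40, 3.40]          # Measure 1 (dangerousness) averages

hi = 10                                # cap chosen just above the highest score
fig, ax = plt.subplots()
ax.bar(conditions, averages)
ax.set_ylim(0, hi)
ax.set_yticks([hi / 5 * i for i in range(6)])   # 0, 2, 4, 6, 8, 10
ax.set_ylabel("Average rating")
ax.set_title("Measure 1 (dangerousness): comparison over time")
fig.savefig("measure1_dangerousness.png")
```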

In the subsequent textbox, change is defined as subtraction; for example, pre-test minus post-test. Raw difference scores vary for reasons (such as sampling error) that are not worth detailing in this Guidebook; a rough correction is made by dividing the difference score by two. The difference between pre-test and post-test divided by two is 2.1. The last task in the table answers the central question: the ratio is considered significant if it is greater than 1.0, which is found for both the pre to post-test ratio and the pre to follow-up result. The anti-stigma program had positive effects on dangerousness.

Is this difference significant and meaningful? (Measure 1: dangerousness)

Time _X_ differences          Difference   Ratio = difference / 2   Significant?
Pre - Post                    4.2          2.1                      *
Pre - F-up                    4.2          2.1                      *
Group ___ differences (Grp 1 - Grp 2, Grp 1 - Grp 3, Grp 2 - Grp 3): not used in this design

Rule: a ratio greater than 1.0 is significant and starred (*); a ratio less than -1.0 is significant and marked with a pound sign (#).
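The same subtraction-and-divide-by-two rule can be applied to all three measures at once with a short script (Python assumed); the output reproduces the stars and pound signs entered on the worksheet.

```python
# Apply the worksheet's significance rule to the example averages:
# ratio > 1.0 earns a *, ratio < -1.0 earns a # (stigma got worse).
averages = {                      # (pre, post, follow-up) means from the Table
    "dangerousness": (7.60, 3.40, 3.40),
    "fear":          (7.48, 3.24, 7.32),
    "avoidance":     (6.16, 8.32, 6.08),
}

def flag(ratio: float) -> str:
    if ratio > 1.0:
        return "*"
    if ratio < -1.0:
        return "#"
    return "none"

for measure, (pre, post, fup) in averages.items():
    for label, later in [("pre - post", post), ("pre - f-up", fup)]:
        difference = pre - later
        ratio = difference / 2
        print(f"{measure:13s} {label}: diff={difference:+.2f} "
              f"ratio={ratio:+.2f} -> {flag(ratio)}")
```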

VIIb. This section shows a graph and table for fear, the second measure in the study. The y-axis is calibrated similarly to the graph in VIIa. Bars for pre-test go as high as 7.5. Post-test is lower, at 3.2. The bar for follow-up is back up at 7.3. The graph suggests a big decrease was found after research participants completed the First Person Stories, Inc. program. However, the follow-up score actually went back up to almost the pre-test score. This kind of finding suggests that the benefit occurring immediately after the anti-stigma strategy returned to baseline. These data show the benefits did not have a long-term impact.

[Worksheet VIIb: bar graph for Measure 2 (fear). COMPARISON IS TIME; y-axis from 0 to Hi = 10; bars at pre = 7.5, post = 3.2, follow-up = 7.3.]

The apparent difference in bars in the graph is supported by the Table below. Namely, subtracting post from pre yields 4.24 which, after dividing by 2, gives a ratio of 2.12; being higher than 1.0, it is a significant finding. Also note that the ratio in row two is lower than 1.0 and is therefore not starred (*).

Is this difference significant and meaningful? (Measure 2: fear)

Time _X_ differences          Difference   Ratio = difference / 2   Significant?
Pre - Post                    4.24         2.12                     *
Pre - F-up                    0.16         0.08                     none
Group ___ differences: not used in this design

VIIc. Finally, the graph in VIIc illustrates an example of harmful effects. Entering averages from the last three columns in the graph yields results of concern. Namely, reports of avoidance went up from pre to post-test, indicating avoidance got worse. The difference score was -2.16 which, after being divided by 2, is -1.08; this is less than -1.0 and earns a #. Let us stop to consider what this means. Something about the program provided by First Person Stories, Inc. actually harms research participants. CBPR team members need to critically re-examine anti-stigma programs that result in negative effects.
[Worksheet VIIc: bar graph for Measure 3 (avoidance). COMPARISON IS TIME; y-axis from 0 to Hi = 10; bars at pre = 6.2, post = 8.3, follow-up = 6.1.]

Is this difference significant and meaningful? (Measure 3: avoidance)

Time _X_ differences          Difference   Ratio = difference / 2   Significant?
Pre - Post                    -2.16        -1.08                    #
Pre - F-up                    0.08         0.04                     none
Group ___ differences: not used in this design

VIII. Findings from the graphs and tables are all collapsed into one place, summarized in the textbox below.


Making Sense of the Data (Section VIII):

Note that all spaces for group differences are struck from the Table. Difference scores for pre to post-test and pre to follow-up remain. Positive effects were found for dangerousness. Stigma lessened from beginning to immediately after the program and remained improved 10 days later at follow-up. A mixed picture emerged for fear. Improvement was noted from pre to post-test, but no change was found at follow-up. Findings for avoidance were sobering. Avoidance actually worsened from pre-test to directly after completion of the program. No difference, however, was found between pre and follow-up, suggesting the negative effect corrected itself during the 10 days following. At the bottom of the Table are the totals. What do the six sets of findings show? Half supported positive effects. A third showed neither good nor bad effects. One finding showed worse effects. As discussed earlier, negative findings are especially of concern.
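The totals at the bottom of the Section VIII box amount to counting flags. A minimal sketch (Python assumed):

```python
# Tally the flags produced by the significance rule for the six comparisons.
flags = ["*", "*",        # Measure 1: pre-post, pre-f-up
         "*", "none",     # Measure 2
         "#", "none"]     # Measure 3

totals = {symbol: flags.count(symbol) for symbol in ("*", "#", "none")}
print(totals)   # {'*': 3, '#': 1, 'none': 2}
```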

For Measure 1: _dangerousness_____________________ Anti-stigma program showed significant change (CHECK ONE) pre to post pos (*) __X_ neg (#) ___ none____ pre to f-up pos (*) __X_ neg (#) ___ none____ add all for a subtotal __2__ __0_ __0_ Anti-stigma program showed significant change Grp 1 to Grp 2 pos (*) ____ neg (#) ____ none____ Grp 1 to Grp 3 pos (*) ____ neg (#) ___ none____ Grp 2 to Grp 3 pos (*) ____ neg (#) ___ none____ add all for a subtotal ______ ______ ______ For Measure 2: __fear____________________ Anti-stigma program showed significant change (CHECK ONE) pre to post pos (*) __X_ neg (#) ___ none____ pre to f-up pos (*) ____ neg (#) ___ none_ X__ add all for a subtotal __1__ ___0___ __1__ Anti-stigma program showed significant change Grp 1 to Grp 2 pos (*) ____ neg (#) ___ none____ Grp 1 to Grp 3 pos (*) ____ neg (#) ___ none____ Grp 2 to Grp 3 pos (*) ____ neg (#) ___ none____ add all for a subtotal ______ ______ ______ For Measure 3: ___avoidance___________________ Anti-stigma program showed significant change (CHECK ONE) pre to post pos (*) ____ neg (#) _X_ none___ pre to f-up pos (*) ____ neg (#) ___ none _X_ add all for a subtotal __0___ __1___ __1___ Anti-stigma program showed significant change Grp 1 to Grp 2 pos (*) ____ neg (#) ____ none____ Grp 1 to Grp 3 pos (*) ____ neg (#) ____ none____ Grp 2 to Grp 3 pos (*) ____ neg (#) ____ none____ add all for a subtotal ______ ______ ______ 1 2 add all subtotals

Information from the Making Sense of the Data textbox is meant to inform the To Do list. Also of importance is information from the Fidelity and Satisfaction findings, found in Section IX on the next page. The Fidelity Checklist on the next page was completed by a research assistant sitting quietly at the back of the room and checking off generic and strategy-specific component behaviors as they appeared during the program.
FIDELITY CHECKLIST -- CONTACT: First Person Stories, Inc. (name of program)
(Each component was checked X if observed; RATIO = components observed ÷ components applicable. XXXX marks component sets struck from this evaluation.)

GENERIC COMPONENTS
Introductions: Name of facilitators and program; Purpose of meeting; Personal goals.   RATIO 1.00
Evaluation: Explain the need for pre-test measure (struck); Obtain permission to participate; Administer pre-test before program begins.   RATIO .50
Stories of facilitator 1: On the way down stories; On the way up stories; Stories of hope; Stories of recovery; Stories of good treatments.   RATIO 1.00
Stories of facilitator 2: On the way down stories; On the way up stories; Stories of hope; Stories of recovery; Stories of good treatments (struck).   RATIO .25
Stories of facilitator 3: all story components struck.   RATIO XXXX
Discussion: Invites comments from program participants; Asks questions to stimulate conversation; Reflects back comments; Refers to facilitator stories to illustrate issues.   RATIO .50
Follow-up and homework: Assign some kind of self-monitoring task; Inform participant of time and place where homework will be discussed/reviewed; Obtain information to seek out participant for follow-up.   RATIO .00
Conclusion: Summarize key points of program.   RATIO XXXX
Post-test: Hand out post-test.   RATIO ____

COMPONENTS SPECIFIC TO THIS ANTI-STIGMA PROGRAM
Introductions PLUS: Introduce people in the audience.   RATIO 1.00
Evaluation PLUS: (none added).   RATIO .0
Stories of facilitator 1 PLUS: Stress experiences in county jail; Review homeless history.   RATIO 1.00
Stories of facilitator 2 PLUS: Discuss bad experiences with treatment.   RATIO .00
Stories of facilitator 3 PLUS: (struck).   RATIO XXXX
Discussion PLUS: Randomly ask questions of participants.   RATIO 1.00
Follow-up and homework PLUS: (none added).   RATIO XXXX
Conclusion PLUS: (none added).   RATIO XXXX
Post-test PLUS: (none added).   RATIO XXXX


Note that several of the generic components were omitted from the fidelity analysis, including explanation of the need for pre-test measurement (because the group had participated in other anti-stigma program evaluations in the past), stories of good treatments from facilitator 2 (in fact, facilitator 2 was concerned about bad aspects of recent treatment), and all the components related to stories from facilitator 3 (because facilitator 3 decided he did not want to participate in First Person Stories, Inc. at the time of the study). A few specific program components were added to the checklist, including facilitator 1 stressing his experiences in the county jail and his homeless history, facilitator 2 reviewing bad experiences in treatment, and, for the discussion, randomly asking questions of participants. In reviewing marks on the Fidelity Checklist, note that facilitator 1 showed all the components of the program assigned to her. Facilitator 2, however, missed many of the expected components, only recounting on the way down stories. None of the follow-up and homework components were observed during the program presentation. Information from the checklist is then used to complete Table IX. Components with the lowest fidelity ratios (highlighted in the Fidelity Checklist) may be excellent candidates for things to change in the program; those with the highest ratios may be especially important to continue in future uses of the program.

IX  Best and Worst from Fidelity Checklist
Best
_1_ stories of facilitator 1
_2_ discussion plus
_3_ introductions
Worst
_1_ evaluations
_2_ ________________________
_3_ ________________________

Another way to determine good versus not-so-good components of the intervention is completion of the Satisfaction with Program and related forms (starting on the next page). The list of components from the Fidelity Checklist is reviewed to identify ten items for the Satisfaction with Program form. Specific items should be identified from the Fidelity Checklist by the CBPR team, including those which seem most important or most likely to indicate especially important components for program development. The team should also consider the value of including generic versus program-specific components, or some mix thereof. The ten items are reformatted to fit a self-administered test like the one on the next page. Research participants are instructed to answer each item in the list using the seven-point satisfaction scale. For example, a research participant who was mostly dissatisfied with the on the way down stories by facilitator 1 might give that item a 2 from the scale. The same person rated asks questions to stimulate conversation a 6.
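The fidelity ratio is simply the number of components observed divided by the number of components that were applicable (struck components are dropped from the denominator). The sketch below restates a few rows of the completed checklist; the 0.80 and 0.33 cut-offs come from the summary directions in Appendix 1F, and the variable names are ours.

```python
checklist = {                                  # component group: (observed, applicable)
    "introductions": (3, 3),
    "evaluation": (1, 2),                      # pre-test explanation was struck
    "stories of facilitator 1": (5, 5),
    "stories of facilitator 2": (1, 4),        # stories of good treatments was struck
    "discussion": (2, 4),
    "follow-up and homework": (0, 3),
}

for component, (observed, applicable) in checklist.items():
    ratio = observed / applicable
    note = "best" if ratio > 0.80 else "worst" if ratio < 0.33 else ""
    print(f"{component:26s} {ratio:.2f} {note}")
```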


SATISFACTION WITH PROGRAM form
Name or ID Number: _______3675_________
Using the satisfaction scale, rate your satisfaction (how pleased you were) with the following components of the program.

1            2            3            4            5            6            7
Very unsatisfactory                                              Very satisfactory

Enter specific items here                                    Satisfaction Rating
-- On the way down stories                                           2
-- On the way up stories                                             6
-- Stories of hope                                                   3
-- Stories of recovery                                               2
-- Invites comments from program participants                        5
-- Asks questions to stimulate conversation                          6
-- Assign some kind of self-monitoring task                          5
-- stress experiences in county jail                                 3
-- review homeless history                                           2
-- discuss bad experiences with treatment                            1

Completed program satisfaction ratings are then summarized in the tally sheet. The sheet collects ticks in columns labeled SATISFIED and DISSATISFIED for individual components. The CBPR team member tallying the ratings ticks an item as satisfactory if the research participant rated the component greater than 5 and as unsatisfactory if the rating was less than 3. So, for example, the CBPR team member would tick asks questions to stimulate conversation as satisfied for the person who filled out the Satisfaction Rating sheet above.
Tally Sheet

Enter specific items here                        | SATISFIED          | RATIO | DISSATISFIED        | RATIO
-- On the way down stories                       | /////              | .20   | //////              | .24
-- On the way up stories                         | ////////////       | .48   | ///                 | .12
-- Stories of hope                               | ////////////////// | .72   | //                  | .08
-- Stories of recovery                           | ////////////////// | .72   | /                   | .04
-- Invites comments from program participants    | /////              | .20   | ////////////        | .48
-- Asks questions to stimulate conversation      | ///////////////// | .68   | ///                 | .12
-- Assign some kind of self-monitoring task      | ///                | .12   | /////////////////// | .75
-- stress experiences in county jail             | ///////            | .28   | ////////            | .32
-- review homeless history                       |                    | 0     | ////////////////// | .82
-- discuss bad experiences with treatment        | ////////           | .32   | ////////            | .32

SATISFIED: enter one tick for each research participant who rated the item higher than 5.  RATIO: divide satisfied ticks by total N (_25_); circle if greater than .66.
DISSATISFIED: enter one tick for each research participant who rated the item less than 3.  RATIO: divide dissatisfied ticks by total N (_25_); highlight if greater than .66.
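The paragraph that follows walks through the tally arithmetic in prose; as a quick reference, here is a minimal sketch of the same steps. The ratings list is hypothetical, standing in for one component's ratings across all 25 research participants.

```python
N_PARTICIPANTS = 25

def tally(ratings):
    satisfied = sum(1 for r in ratings if r > 5)      # SATISFIED tick: rating greater than 5
    dissatisfied = sum(1 for r in ratings if r < 3)   # DISSATISFIED tick: rating less than 3
    return satisfied / N_PARTICIPANTS, dissatisfied / N_PARTICIPANTS

sat_ratio, dis_ratio = tally([6, 7, 3, 6, 2, 7, 6, 5, 6, 7, 6, 1, 6,
                              4, 6, 7, 6, 6, 2, 7, 5, 6, 3, 6, 7])
print(round(sat_ratio, 2), round(dis_ratio, 2))   # 0.68 0.12 for this made-up list
# Circle the satisfied ratio if it exceeds 0.66; highlight the dissatisfied ratio if it exceeds 0.66.
```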

Ratios are determined by dividing the number of ticks in each box by the number of research participants in the evaluation. In the sample tally sheet, 18 research participants rated satisfaction with stories of hope greater than 5 which, given that there are 25 research participants, yields a ratio of .72. Satisfied ratios are circled when they are greater than 0.66, signaling that research participants were satisfied with the individual component. Dissatisfied ratios are highlighted when the ratio of dissatisfied ticks to the total is greater than 0.66, suggesting an unsatisfactory component. Highlights and circles are used to fill out the bottom half of Table IX. Three components were rated as highly satisfactory (>0.66) and hence included in Section IX as such; two components were rated as highly unsatisfactory and included in the textbox.

IX  Satisfactory-Unsatisfactory Program Components
Satisfactory
_1_ asks questions to stimulate conversation
_2_ stories of hope
_3_ stories of recovery
Unsatisfactory
_1_ assign some kind of self-monitoring task
_2_ ________________________

The information in Section IX is used to complete the To Do list in Section X. Note that the To Do list is not meant only to flag what is wrong with the program but also what works well; the latter components are the firm base on which program development occurs.

TO DO LIST:
REPLACE anti-stigma program: check if yes _______ (review for alternative programs)
MODIFY program based on fidelity and satisfactory/unsatisfactory findings:
  - Introduce program participants to each other.
  - Ask questions to stimulate conversation.
  - Examine specific components of discussion.
TEACH facilitators based on fidelity and satisfactory/unsatisfactory findings:
  - Consult with facilitator 2 regarding her stories about homelessness.
  - Review role of evaluation.

Four possible issues might be relevant to modifying the program. Introducing participants to each other and follow-up on homework were identified on the Fidelity Checklist as the components least often used in the program. Consider the significance of this point. It might suggest that facilitators need to specifically keep track of introductions and homework. Alternatively, these findings might suggest introductions and homework are unimportant and can be discarded from the program with no harm.


Two other issues might be important for program modification: inviting comments from participants and assigning self-monitoring homework. These were ranked as least satisfactory, so the CBPR team should decide whether to alter them so they become more appealing to program participants. Two issues emerge as teachable concerns. An agent of the CBPR team may wish to consult facilitator 2 regarding her story about homelessness. Other teachable foci are the use of self-monitoring tasks and the role of follow-up and homework. We reiterate the point made earlier in the guidebook: the To Do list is solely meant to provide suggestions. Regardless of the findings, facilitators and others involved in the anti-stigma program should be approached by the CBPR team as mutually respected peers. The anti-stigma program has a history built on the efforts of others, and its components should not be set aside in a cavalier manner. Healthy discussion among the CBPR team and facilitators is an important step toward further understanding the program and directions for change.


Chapter 6 Cross Group Differences


Another important goal of program evaluation is to determine how anti-stigma programs differ across key groups. When we say key groups, we generally mean demographics, including gender, ethnicity, age, marital status, education, annual household income, and work status. Appendix 2E has a sheet labeled Information About You, which might be administered as a way to obtain demographic information. Note that we were purposefully over-inclusive to provide opportunities for research participants to identify the broadest range of relevant group differences. Depending on the situation, the CBPR team might decide to omit a few items or include even more, such as military service, physical health problems, and police arrest history. Examples of group comparisons that are especially important are gender (male to female) or ethnicity (such as European American to African American). We use differences between African and European Americans here as an example, though we recognize these are only two of many possible ethnic group comparisons; for instance, differences between European Americans and Latino Americans may be especially important in some Southwestern United States communities. In this example, we retain many of the evaluation decisions from the previous chapter (Sections I-III) on First Person Stories, Inc., targeting teachers in the faculty dining room, and we assume the same CBPR team (e.g., Jane responsible for the overall evaluation). It is essential here to make sure the CBPR group is diverse. Clearly this means stakeholders from different backgrounds and ethnicities, such as African and European Americans. But even within the idea of stakeholder are some interesting possibilities; perhaps the CBPR team will specifically decide to recruit ministers and other members of faith communities from the African American community. Noticeably different in the evaluation plan are the answers to Question IV, highlighted in the textbox. Ethnic differences are the primary focus of the approach: do Blacks and Whites differ in their reactions to the anti-stigma program? More useful, however, are answers to the second question: What characteristics account for the group differences? Answers to that question yield specific directions for revising the approach.

IV  Question(s): Examining change due to anti-stigma program
How does First Person Stories, Inc. affect African Americans versus European Americans? What are those differences?

We use the same measures in this group-difference evaluation as those in the previous chapter: dangerousness, fear, and avoidance. Defining the comparison group is perhaps the key decision of an evaluation examining African American and European American participants (see Section V on the next page).


In more traditional program evaluations, comparison refers to the anti-stigma program versus a control or some other group. Differences between ethnic groups are the question of interest here; hence, the SAME anti-stigma program is used for both ethnic groups. This point is highlighted in Section V. Once again, the experienced researcher might argue that these data should be collected at pre- AND post-test. Although this may be important in the most rigorous of designs, use of post-test data only may also yield important findings.

V  Comparison Group
OVER TIME: yes? __no__    ___ pre   ___ post   ___ follow-up    ______ number of days from post to follow-up
ACROSS GROUPS: yes? __yes__    Is this a wait-list control group? ______
Name of comparison group(s): ___African American___   ___European American___
Note that for across-group data, measures are collected once, at post-test.

Unclear here is whether the individual anti-stigma program under evaluation would be provided to a mixed group of participants (African American AND European American) or to relatively homogeneous groups (all African American or all European American). There are pros and cons of mixed versus single-ethnicity groups which the CBPR team might wish to consider. Mixed groups may better parallel real-world situations, and recruiting single-ethnicity groups is made more difficult by the need to identify people of similar ethnic backgrounds. Presenting to single ethnic groups, however, may increase the race-related positive effects of the program; for example, participants may be more forthright about stigma and stigma change when their group is populated solely by people of their own ethnicity. In our example here, the groups receiving First Person Stories, Inc. were mixed. The ethnic group comparison design changes the appearance of the data table in Section VII (see the next page). The blacked-out column reduces the groups from three per measure to two: African American versus European American. As in the previous example, the last row of the table lists the average of responses for the 25 research participants.


VII
              Measure 1: dangerousness     Measure 2: fear              Measure 3: avoidance
              Grp 1        Grp 2           Grp 1        Grp 2           Grp 1        Grp 2
              African      European        African      European        African      European        (Grp 3 column blacked out)
              American     American        American     American        American     American
 1               5            3               5            4               4            6
 2               6            4               3            6               3            8
 3               7            9               7            5               6            7
 4               7            2               5            2               5            8
 5               5            6               7            8               8            5
 6               4            2               2            7               2            7
 7               5            3               4            4               3            6
 8               7            4               5            6               5            9
 9               7            3               3            3               3            8
10               7            8               4            4               6            7
11               8            5               6            7               1            6
12               7            7               5            5               1            5
13               8            2               4            6               3            4
14               6            4               5            5               5            7
15               7            3               7            1               2            8
16               4            4               3            3               4            6
17               5            3               2            7               3            7
18               5            3               6            6               5            8
19               6            4               9            4               5            7
20               5            3               4            5               3            5
21               6            4               8            6               4            6
22               5            5               6            3               2            7
23               7            7               5            8               8            6
24               8            9               4            4               8            6
25               7            3               5            7               4            8
Average       6.16         4.40            4.96         5.04            4.12         6.68
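The bottom row of the table is just the column average, and the group comparison in VIIa starts from the difference between those averages. A minimal sketch, using the dangerousness columns copied from the table (the variable names are ours):

```python
african_american = [5, 6, 7, 7, 5, 4, 5, 7, 7, 7, 8, 7, 8, 6, 7, 4, 5, 5, 6, 5, 6, 5, 7, 8, 7]
european_american = [3, 4, 9, 2, 6, 2, 3, 4, 3, 8, 5, 7, 2, 4, 3, 4, 3, 3, 4, 3, 4, 5, 7, 9, 3]

mean_aa = sum(african_american) / len(african_american)      # 6.16, as in the bottom row
mean_ea = sum(european_american) / len(european_american)    # 4.40
print(mean_aa, mean_ea, round(mean_aa - mean_ea, 2))          # difference = 1.76, entered in VIIa
```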

Means are then entered into the bar graphs (shown for dangerousness on the next page). The African American response to dangerousness after the anti-stigma condition appears higher than that of the European American group, and the table that accompanies the graph addresses the question more rigorously. Group differences are circled in the table, consistent with the type of design in the study. The difference score (6.16 - 4.40 = 1.76) is positive and greater than one, which suggests European Americans benefited from the intervention more than African Americans; more specifically, they agreed with the idea of dangerousness less than their African American counterparts. Though not shown here, the analysis of the difference between the two groups for fear was not significant. The difference for avoidance was greater than one and negative, suggesting that the African American audience endorsed ideas of avoidance less than the European American group.


VIIa
Measure 1: dangerousness -- COMPARISON IS GROUP
(Bar graph: the vertical axis MEASURE runs from 0 to 10 with marks every 2; bars show Grp 1 African American and Grp 2 European American. The alternative OVER-TIME graph with Pre, Post, and F-Up bars is struck out because this evaluation compares groups at post-test.)

Is this difference significant and meaningful?
Group _X_ differences:   Grp 1 - Grp 2 = 1.76    Grp 1 - Grp 3 = (struck)    Grp 2 - Grp 3 = (struck)
Time ___ differences:    (struck)
If the ratio is greater than +1.0, significant (*); if less than -1.0, significant (#).

Making Sense of the Data (in Section VIII on the next page) is markedly different when examining differences by ethnicity rather than differences between intervention and control groups: specific group differences are assessed. Note that in the textbox for Section VIII, differences between African American (Afr.Am) and European American (Eur.Am) participants are listed by measure. For dangerousness, the difference between groups was positive, so an asterisk (*) is entered, indicating African Americans endorsed dangerousness more than European Americans. No significant difference was found for fear, so a zero (0) is entered. European Americans significantly endorsed avoidance more than African Americans, so a pound sign (#) is entered. Responses are totaled at the bottom of the box. Note that European Americans showed better outcomes than African Americans on one attitude (dangerousness) and African Americans better than European Americans on another (avoidance). This leads to the next question: what is it about the anti-stigma program that leads to these differences? One way to answer the question of which components matter is to examine fidelity, that is, which components facilitators actually used in the presentation of the program. This kind of fidelity analysis, however, does not distinguish groups in a study with mixed audiences (both African American and European American), because both groups saw the same presentation.


Another way to determine which components are relevant to group-specific effects is completion of the Satisfaction with Program form and related forms (starting on the next page).


VIII  Making Sense of the Data:

For Measure 1: dangerousness
  Afr.Am > Eur.Am   pos (*) __*__
  Eur.Am > Afr.Am   neg (#) _____
  Eur.Am = Afr.Am   none    _____

For Measure 2: fear
  Afr.Am > Eur.Am   pos (*) _____
  Eur.Am > Afr.Am   neg (#) _____
  Eur.Am = Afr.Am   none    __0__

For Measure 3: avoidance
  Afr.Am > Eur.Am   pos (*) _____
  Eur.Am > Afr.Am   neg (#) __#__
  Eur.Am = Afr.Am   none    _____

Totals:   Afr.Am > Eur.Am  ___1___      Eur.Am > Afr.Am  ___1___
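Once the CBPR team has decided which group differences count as significant, the Section VIII bookkeeping is a simple tally. The sketch below restates the three entries above; the names are ours, not the Guidebook's.

```python
findings = {
    "dangerousness": "*",   # Afr.Am endorsed dangerousness more than Eur.Am
    "fear": "0",            # no significant group difference
    "avoidance": "#",       # Eur.Am endorsed avoidance more than Afr.Am
}

afr_gt_eur = sum(1 for flag in findings.values() if flag == "*")
eur_gt_afr = sum(1 for flag in findings.values() if flag == "#")
print(afr_gt_eur, eur_gt_afr)   # 1 1, matching the totals at the bottom of the box
```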

The list of components from the Fidelity Checklist is reviewed to omit those believed unnecessary for conducting the program (this process is summarized in more detail earlier in the Guidebook). In the earlier example, the Satisfaction with Program form included ten items, which we repeat here as an example. Research participants are again instructed to rate each of the items on a 7-point satisfaction scale.
SATISFACTION WITH PROGRAM form

Enter specific items here                                    Satisfaction Rating
-- On the way down stories                                           4
-- On the way up stories                                             3
-- Stories of hope                                                   2
-- Stories of recovery                                               3
-- Invites comments from program participants                        1
-- Asks questions to stimulate conversation                          7
-- Assign some kind of self-monitoring task                          5
-- stress experiences in county jail                                 4
-- review homeless history                                           2
-- discuss bad experiences with treatment                            3



Completed program satisfaction ratings are summarized in the tally sheet, but this sheet is set up differently. The sheet tracks items rated satisfactorily (greater than 5) separately for research participants who were African American versus European American. The difference between the two ethnic groups (Column II minus Column I) is entered into the table, followed by a ratio (the difference divided by the total number of research participants). The ratio should be circled in cases where its absolute value is greater than 0.25.
Enter specific items here                        | Column I: SATISFIED | Column II: SATISFIED | Subtract: Col II - Col I | Ratio: difference ÷ N (_50_)
-- On the way down stories                       | ///////             | /////////            |   2                      |  0.04
-- On the way up stories                         | /////////           | ///                  |  -6                      | -0.12
-- Stories of hope                               | /////////////       | /                    | -12                      | -0.24
-- Stories of recovery                           | ////////////////    | /                    | -15                      | -0.30
-- Invites comments from program participants    | ///                 | ////////////         |   9                      |  0.18
-- Asks questions to stimulate conversation      | //////////////      | ///                  | -11                      | -0.22
-- Assign some kind of self-monitoring task      | //                  | ////////////////// |  18                      |  0.36
-- stress experiences in county jail             | //////              | ////////             |   2                      |  0.04
-- review homeless history                       |                     | ////////////////// |  18                      |  0.36
-- discuss bad experiences with treatment        | ////////            | ////////             |   0                      |  0

Column I: enter one tick for each African American research participant who rated the item higher than 5.
Column II: enter one tick for each European American research participant who rated the item higher than 5.
Ratio: divide the difference by total N (_50_); circle if the absolute value of the ratio is greater than .25.
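The cross-group tally arithmetic (Column II minus Column I, divided by the 50 research participants, circled when the absolute ratio exceeds 0.25) is easy to script. The counts below restate three rows of the tally sheet; the names are ours.

```python
TOTAL_N = 50
rows = {                                  # item: (Column I ticks, Column II ticks)
    "Stories of recovery": (16, 1),
    "Review homeless history": (0, 18),
    "Discuss bad experiences with treatment": (8, 8),
}

for item, (col_i, col_ii) in rows.items():
    difference = col_ii - col_i
    ratio = difference / TOTAL_N
    marker = "circle" if abs(ratio) > 0.25 else ""
    print(f"{item:40s} {difference:+d} {ratio:+.2f} {marker}")
```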


Circled items are finally entered into the To Do list. Items in the To Do list direct the CBPR team towards components that may need to be modified (or facilitators trained) in order for those components to better reflect the interests of one of the ethnic groups. Items in the To Do list in no way suggest the corresponding component is especially troublesome for an ethnic group, only that that component should be considered by the CBPR team in order to modify the program or train facilitators.

TO DO LIST:
Components that differ across groups: African Americans versus European Americans.

Stories of recovery. Assign self-monitoring task. Review homeless history.


Appendix 1
continued

Appendices A thru F
for the

User-Friendly GUIDE
of the Ten Steps to Evaluate Programs that Erase the Stigma of Mental Illness

A. Ten Steps for Anti-Stigma Program Evaluation Plan
B. Attitudes Scale (Instrument and Scoring Keys): Public Stigma; Self-Stigma
C. Fidelity Assessments: Contact; Education
D. Satisfaction with Program
E. Information About You
F. Summary Directions


Appendix 1A

Ten Steps for Anti-Stigma Program Evaluation Plan (ASPEP-10)


I  What is the anti-stigma program?
Type of Stigma (check one):   PUBLIC ____   SELF ____
Name of program: _________________________________________________
Is there a manual for the program?  Yes____  No____   Name of manual: _________________
Does it have a fidelity measure?  Yes____  No____   If no, one will need to be developed; a Satisfaction with Program form is also needed.

II  WHO IS THE TARGET OF THE PROGRAM?
- For public stigma, possible targets are the general public, high school students, and employers: _______________________
- For self-stigma, targets are usually people with mental illness: _____________________________________
How many targets will participate in the study (at least 25 per group)? _________________________
Where will the program be provided? ________________________________________________________
When are the program and its evaluation? (month/day/year)   START _____________   FINISH ____________
(Start and Finish should include baseline, post-test, and follow-up assessments when appropriate.)

III  WHO is on the CBPR team?
Who is responsible for the overall evaluation by defining the questions and hypotheses? _________________
Who is going to conduct the anti-stigma program(s)? ______________________________________________
Who is going to collect the outcome data and enter them into a computer file? ______________________________
Who is going to collect the Fidelity and Satisfaction data? ___________________________________________
Who is going to analyze the data? _______________________________________________________________
Who is going to make sense of the analyses? ______________________________________________________


IV  Question(s): Examining change due to anti-stigma program
______________________________________________________________________

V  Good Measures and Design
Name of instrument(s) to examine impact of anti-stigma program:
  M1? ___________________________________________
  M2? ___________________________________________
  M3? ___________________________________________

VI  Comparison Group
OVER TIME: yes? _______    _____ pre    _____ post    _____ follow-up    _________ number of days from post to follow-up
ACROSS GROUPS: yes? _______    Is this a wait-list control group? ______
Name of other comparison group(s): _____________________________________    _________________________
(Note that for across-group data, measures are collected once, at post-test.)

HOW WILL FOLLOW-UP DATA BE GATHERED (check one)?   ____ in person   ____ online   ____ by phone   ____ by mail
Contact information for follow-up (check one):   _____ Phone number   _____ e-mail address   _____ Street Address

VII  Table
- Type all scores into the table on the next page for up to three groups (Grp 1, Grp 2, Grp 3) or up to three times (Pre/Post/Follow-up).
- Determine the average of scores in each column and enter it into the bottom row.

(Blank score table: rows numbered 1 through 25 plus a bottom row for the average; for each measure, one column per group or time point.)

              Measure 1 _____________           Measure 2 _____________           Measure 3 _____________
              Grp 1     Grp 2     Grp 3 ?       Grp 1     Grp 2     Grp 3 ?       Grp 1     Grp 2     Grp 3 ?
              Pre       Post      F-up ?        Pre       Post      F-up ?        Pre       Post      F-up ?
  1
  ...
 25
 average


VIIa
Measure 1 _____________________
COMPARISON IS TIME: bar graph with the vertical axis MEASURE from 0 to Hi ____, and bars for Pre, Post, F-Up.
OR COMPARISON IS GROUP: bar graph with the vertical axis MEASURE from 0 to Hi ____, and bars for Grp 1 anti-stigma, Grp 2 _______ (name), Grp 3 _______ (name).  [GRP 1 is always the anti-stigma program.]

Is this difference significant and meaningful?
Group ___ differences:   Grp 1 - Grp 2 = ____   Grp 1 - Grp 3 = ____   Grp 2 - Grp 3 = ____
Time ___ differences:    Pre - Post = ____      Pre - F-up = ____      Post - F-up = ____
ratio = difference ÷ 2      If the ratio is greater than +1.0, significant (*); if less than -1.0, significant (#).

VIIb
Measure 2 _____________________
(Same layout as VIIa: the TIME or GROUP bar graph, followed by the same differences and ratio table.)

VIIc
Measure 3 _____________________
(Same layout as VIIa: the TIME or GROUP bar graph, followed by the same differences and ratio table.)

VIII  Making Sense of the Data:


For Measure 1: ______________________
Anti-stigma program showed significant change (CHECK ONE):
  pre to post:    pos (*) ____   neg (#) ____   none ____
  pre to f-up:    pos (*) ____   neg (#) ____   none ____
  post to f-up:   pos (*) ____   neg (#) ____   none ____
  add all for a subtotal:   ______   ______   ______
Anti-stigma program showed significant change:
  Grp 1 to Grp 2:   pos (*) ____   neg (#) ____   none ____
  Grp 1 to Grp 3:   pos (*) ____   neg (#) ____   none ____
  Grp 2 to Grp 3:   pos (*) ____   neg (#) ____   none ____
  add all for a subtotal:   ______   ______   ______

For Measure 2: ______________________
(same rows as Measure 1)

For Measure 3: ______________________
(same rows as Measure 1)

add all subtotals -- TOTAL:   ______   ______   ______

IX  Making Sense of Fidelity and Satisfaction Data

Best and Worst from Fidelity Checklist
  Best
  _1_________________________________
  _2_________________________________
  _3_________________________________
  Worst
  _1_________________________________
  _2_________________________________
  _3_________________________________

Satisfactory-Unsatisfactory Program Components
  Satisfactory
  _1_________________________________
  _2_________________________________
  _3_________________________________
  Unsatisfactory
  _1_________________________________
  _2_________________________________
  _3_________________________________

TO DO LIST:
REPLACE anti-stigma program: check if yes ______ (review for alternative programs)
MODIFY program based on fidelity and satisfactory/unsatisfactory findings:
  ____________________________   ____________________________   ____________________________
  ____________________________   ____________________________   ____________________________
TEACH facilitators based on fidelity and satisfactory/unsatisfactory findings:
  ____________________________   ____________________________   ____________________________
  ____________________________   ____________________________   ____________________________


Appendix 1B
Public Stigma measure and score sheet (AQ-9)
Self-Stigma measure and score sheet (ES-5)

Generic measure for three items or less

AQ-9
ID Number _____________________________*
Harry is a 30-year-old single man with schizophrenia. Sometimes he hears voices and becomes upset. He lives alone in an apartment and works as a clerk at a large law firm. He has been hospitalized six times because of his illness. Below are nine statements about Harry, on a nine-point scale where 9 is very much. Write down how much you agree with each item. Please place your answer using the 9-point scale below.

1        2        3        4        5        6        7        8        9
None at all                                                    Very much

______ 1. I would feel pity for Harry.
______ 2. How dangerous would you feel Harry is?
______ 3. How scared of Harry would you feel?
______ 4. I would think that it was Harry's own fault that he is in the present condition.
______ 5. I think it would be best for Harry's community if he were put away in a psychiatric hospital.
______ 6. How angry would you feel at Harry?
______ 7. How likely is it that you would not help Harry?
______ 8. I would try to stay away from Harry.
______ 9. How much do you agree that Harry should be forced into treatment with his doctor even if he does not want to?

*We are assigning confidential ID numbers only as a way to track the data. We in no way will attach your name to the number nor will we attribute any of the data or responses to you in particular.


The AQ-9 Score Sheet


Research participant's ID No. _______________________________   date ____________

_____ Blame is represented by Item 4.
_____ Anger is represented by Item 6.
_____ Pity is represented by Item 1.
_____ Help is represented by Item 7.
_____ Dangerousness is represented by Item 2.
_____ Fear is represented by Item 3.
_____ Avoidance is represented by Item 8.
_____ Segregation is represented by Item 5.
_____ Coercion is represented by Item 9.

The higher the score, the more that factor is being endorsed by the subject.
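Because each AQ-9 item maps to exactly one stigma factor, scoring can be automated with a simple lookup. A minimal sketch (the responses are hypothetical and the names are ours):

```python
AQ9_FACTORS = {
    1: "pity", 2: "dangerousness", 3: "fear", 4: "blame", 5: "segregation",
    6: "anger", 7: "help (withheld)", 8: "avoidance", 9: "coercion",
}

responses = {1: 4, 2: 7, 3: 6, 4: 2, 5: 3, 6: 2, 7: 5, 8: 7, 9: 4}   # hypothetical 1-9 answers

scored = {AQ9_FACTORS[item]: score for item, score in responses.items()}
print(scored)   # higher scores mean stronger endorsement of that factor
```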

ES-5
ID Number _____________________________*

Instructions: Below are several statements relating to one's perspective on life and on having to make decisions. Please write the number that is closest to how you feel about the statement. Indicate how you feel now. First impressions are usually best. Do not spend a lot of time on any one question. Please be honest with yourself so that your answers reflect your true feelings. Please place your answer using the 9-point scale below.

1        2        3        4        5        6        7        8        9
Very much                                                      None at all

1. ______ I can pretty much determine what will happen in my life.
2. ______ I generally accomplish what I set out to do.
3. ______ People have the right to make their own decisions, even if they are bad ones.
4. ______ People have no right to get angry just because they don't like something.
5. ______ I rarely feel powerless.

*We are assigning confidential ID numbers only as a way to track the data. We in no way will attach your name to the number nor will we attribute any of the data or responses to you in particular.


The ES-5 Score Sheet


Research participant's ID No. _______________________________   date ____________

______ Self-Esteem/Self-Efficacy is represented by Item 2.
______ Power/Powerless is represented by Item 5.
______ Community Activism/Autonomy is represented by Item 3.
______ Optimism/Control over the Future is represented by Item 1.
______ Righteous Anger is represented by Item 4.

The lower the score, the more that factor is being endorsed by the subject.

Rogers, E.S., Chamberlin, J., Ellison, M.L., & Crean, T. (1997). A consumer-constructed scale to measure empowerment among users of mental health services. Psychiatric Services, 48, 1042-1047.


Attitudes Sheet
ID Number _____________________________*
Answer the following three items on the 9-point scale. Please write your answer using the 9-point scale below.

1        2        3        4        5        6        7        8        9
None at all                                                    Very much

________ M 1: __________________________________________________
(for examiner, enter item verbatim from AQ-9 or ES-5 listed on subsequent pages)

________M 2 : __________________________________________________
(for examiner, enter item verbatim from AQ-9 or ES-5 listed on subsequent pages)

________M 3 : __________________________________________________
(for examiner, enter item verbatim from AQ-9 or ES-5 listed on subsequent pages)

*We are assigning confidential ID numbers only as a way to track the data. We in no way will attach your name to the number nor will we attribute any of the data or responses to you in particular.

Appendix 1C
FIDELITY CHECKLIST
Date checklist was completed: Completed by whom:


EDUCATION _______________ (name of program)

GENERIC COMPONENTS (check X if observed)
Introductions: __ Name of facilitators and program  __ Purpose of meeting  __ Personal goals   RATIO = x/3
Evaluation: __ Explain the need for pre-test measure  __ Obtain permission to participate  __ Administer pre-test before program begins   RATIO = x/3
Teaches facts: __ on illness, symptoms, course, and cause  __ on hope and self-determination  __ on effective biological treatments  __ on effective psychosocial treatments   RATIO = x/4
Teaches myths: __ Dangerousness  __ Blame  __ Competence  __ Benevolence  __ Contrasts myths with facts   RATIO = x/5
Changing targets: __ Identifies discriminatory behavior  __ Identifies affirming behavior  __ Develops plan against discrimination and for affirming behavior   RATIO = x/3
Label avoidance: __ Explains low use of services even when in need  __ Attributes this to stigma  __ Identifies stigma that leads to label avoidance   RATIO = x/3
Self-stigma: __ Discuss empowerment  __ Discuss self-determination  __ Discuss hope  __ Make plan to lessen self-stigma and improve empowerment   RATIO = x/4
Follow-up and homework: __ Assign some kind of self-monitoring task  __ Inform participant of time and place where HW will be discussed/reviewed  __ Obtain information to seek out participant for follow-up   RATIO = x/3
Conclusion: __ Summarize key points of program   RATIO = x/1
Post-test: __ Hand out post-test   RATIO = x/1

COMPONENTS SPECIFIC TO THIS ANTI-STIGMA PROGRAM (check X if observed)
Introductions PLUS: __ ____________  __ ____________   RATIO = x/?
Evaluation PLUS: __ ____________  __ ____________   RATIO = x/?
Teaches facts PLUS: __ ____________  __ ____________   RATIO = x/?
Teaches myths PLUS: __ ____________  __ ____________   RATIO = x/?
Changing targets PLUS: __ ____________  __ ____________   RATIO = x/?
Label avoidance PLUS: __ ____________  __ ____________   RATIO = x/?
Self-stigma PLUS: __ ____________  __ ____________   RATIO = x/?
Follow-up and homework PLUS: __ ____________  __ ____________   RATIO = x/?
Conclusion PLUS: __ ____________   RATIO = x/?
Post-test PLUS: __ ____________   RATIO = x/?


FIDELITY CHECKLIST
Date checklist was completed:          Completed by whom:
CONTACT _________________ (name of program)

GENERIC COMPONENTS (check X if observed)
Introductions: __ Name of facilitators and program  __ Purpose of meeting  __ Personal goals   RATIO = x/3
Evaluation: __ Explain the need for pre-test measure  __ Obtain permission to participate  __ Administer pre-test before program begins   RATIO = x/3
Stories of facilitator 1: __ On the way down stories  __ On the way up stories  __ Stories of hope  __ Stories of recovery  __ Stories of good treatments   RATIO = x/5
Stories of facilitator 2: (same five story components)   RATIO = x/5
Stories of facilitator 3: (same five story components)   RATIO = x/5
Discussion: __ Invites comments from program participants  __ Asks questions to stimulate conversation  __ Reflects back comments  __ Refers to facilitator stories to illustrate issues   RATIO = x/4
Follow-up and homework: __ Assign some kind of self-monitoring task  __ Inform participant of time and place where homework will be discussed/reviewed  __ Obtain information to seek out participant for follow-up   RATIO = x/3
Conclusion: __ Summarize key points of program   RATIO = x/1
Post-test: __ Hand out post-test   RATIO = x/1

COMPONENTS SPECIFIC TO THIS ANTI-STIGMA PROGRAM (check X if observed)
Introductions PLUS: __ ____________  __ ____________   RATIO = x/?
Evaluation PLUS: __ ____________  __ ____________   RATIO = x/?
Stories of facilitator 1 PLUS: __ ____________  __ ____________   RATIO = x/?
Stories of facilitator 2 PLUS: __ ____________  __ ____________   RATIO = x/?
Stories of facilitator 3 PLUS: __ ____________  __ ____________   RATIO = x/?
Discussion PLUS: __ ____________  __ ____________   RATIO = x/?
Follow-up and homework PLUS: __ ____________  __ ____________   RATIO = x/?
Conclusion PLUS: __ ____________   RATIO = x/?
Post-test PLUS: __ ____________   RATIO = x/?


Appendix 1D
SATISFACTION WITH PROGRAM form


Name or ID Number _________________________*
Using the satisfaction scale, rate your satisfaction (how pleased you were) with the following components of the program.

1            2            3            4            5            6            7
Very unsatisfactory                                              Very satisfactory

Enter specific items here                                    Satisfaction Rating
-- _________________________________________________        ______
(list up to ten components selected from the Fidelity Checklist)


*We are assigning confidential ID numbers only as a way to track the data. We in no way will attach your name to the number nor will we attribute any of the data or responses to you in particular.

SATISFACTION WITH PROGRAM Tally Sheet

Enter specific items here | SATISFIED (one tick for each research participant who rated the item greater than 5) | RATIO (divide satisfied by total N [_25_]; circle if greater than .75) | DISSATISFIED (one tick for each research participant who rated the item less than 3) | RATIO (divide dissatisfied by total N [_25_]; highlight if greater than .75)

Appendix 1E
Information About You
The CBPR team may decide to add, omit or modify items for the goals of their study.


ID Number _____________________________*
Providing this information will yield a more complete evaluation of the anti-stigma program.

Gender:   male ____   female ____        Age: ____

Ethnicity (check all that apply)
_____ African/African American     _____ Asian/Asian American     _____ Hispanic/Latino
_____ European/European American   _____ Native American          _____ Pacific Rim
_____ Other ___________________

Marital Status (check the best answer)
_____ Single     _____ Married/Partnered     _____ Separated/Divorced     _____ Widowed

Completed Education (check the best answer; leave blank if no high school education)
_____ High school/GED diploma     _____ Some college     _____ Certification (e.g., day care technician)
_____ Associate's degree          _____ BA/BS            _____ MA/MS
_____ PhD/professional degree (e.g., JD, MD)

Annual Household Income (check the best answer)
_____ $0 - $20,000        _____ $20,000 - $40,000     _____ $40,000 - $60,000
_____ $60,000 - $80,000   _____ $80,000 - $100,000    _____ $100,000 - $120,000
_____ > $120,000

Current Work Status (check the best answer)
_____ Unemployed/searching for work     _____ Unemployed/satisfied with situation
_____ Part-time work                    _____ Full-time work
_____ Volunteer work

Sexual orientation
_____ Heterosexual     _____ Homosexual     _____ Bisexual     _____ Transgender
_____ Asexual          _____ Other __________________

Religious and/or spiritual affiliation (check all that apply)
_____ Christianity     _____ Judaism     _____ Islam     _____ Buddhism
_____ Hinduism         _____ Secular humanism     _____ Other __________________

*We are assigning confidential ID numbers only as a way to track the data. We in no way will attach your name to the number nor will we attribute any of the data or responses to you in particular.

Appendix 1F
Summary Directions


Step-by-step directions are provided here to successfully use and complete the different parts of the Guide: the Ten Step Anti-Stigma Program Evaluation Plan (ASPEP-10), the Fidelity Checklists, and the Satisfaction with Program form.


Ten Step Anti-Stigma Program Evaluation Plan


ASPEP-10

I. What is the anti-stigma program?


Indicate (check) whether the anti-stigma program targets public or self-stigma. Enter the name of the program. Specify whether a manual and/or fidelity measure already exist.

II. Indicate the target of the anti-stigma program. Specify for whom the anti-stigma program is targeted; this clearly differs for self- versus public stigma. Where specifically will the program be presented? What physical space will be provided? Enter the day and time when the program will be offered.

III. Name the various stakeholders who will regularly and actively participate in the CBPR team. Space for eight names is provided, but additional names should be written on separate paper. Specific CBPR team assignments are needed to make sure the various components of the research process are accomplished:
-- Responsible for evaluation
-- Conduct the anti-stigma program
-- Collect data and enter into a file
-- Collect fidelity/satisfaction data
-- Analyze the data
-- Make sense of analyses

IV. Write in the fundamental evaluation questions that govern the overall study.

V. Between one and three measures may be selected for the Evaluation Plan. Specify the names of each measure in the spaces M1-M3.

VI. Specify whether the study is a comparison over time or across groups. If over time, is there a follow-up? If across groups, is one group a wait-list control? Write in the third comparison group if appropriate.

VII. The directions for the table are written here.
- Type all scores into the table for the two to three groups (Grp 1, Grp 2, Grp 3) or two to three times (Pre/Post/Follow-up).
- Determine the average of scores in each column and enter it into the last row of the table.

VIIa. Draw the graph for Measure 1, then complete the graphs; there are three graphs, one for each measure. On the vertical axis marked MEASURE 1, the low value is 0; the high end should be just above the largest average in the table. Then number the marks that divide the vertical axis. On the horizontal axis, list the names of the times or groups. Draw in each bar by entering the averages taken from the Table under the columns for Measure 1. Then determine whether the differences are significant and meaningful: check whether the analysis is group or time, and determine all the differences that apply. The denominator for the ratio is either the pre-test average divided by two or the anti-stigma group's average divided by two. The ratio is the difference score divided by the denominator. If the ratio is greater than +1.0, star (*) the cell, meaning a good outcome. If the ratio is less than -1.0, put a pound sign (#) in the cell, meaning a bad outcome.

VIIb-c. Complete graphs and differences for the remaining two measures.

VIII. Making sense of the data. Enter the measure names for measures one to three. Values will be listed for either the time comparisons (e.g., pre to post) or the group comparisons (e.g., Grp 1 to Grp 2). Put an asterisk (*) in each space that yielded a positive and significant finding. Put a pound sign (#) in each space that yielded a negative and significant finding. Put a zero (0) in the space labeled none where neither * nor # was found. Get a subtotal of positives, negatives, and neutrals for each of the three measures, then add up the subtotals into an overall set of total scores.
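As a worked illustration of the ratio rule stated in the VIIa directions above (denominator equal to half of the pre-test average, or half of the anti-stigma group's average), here is a short sketch with hypothetical numbers; the function name is ours.

```python
def ratio_and_flag(difference, anchor_average):
    """anchor_average is the pre-test average (time design) or the anti-stigma group's average (group design)."""
    denominator = anchor_average / 2
    ratio = difference / denominator
    flag = "*" if ratio > 1.0 else "#" if ratio < -1.0 else "none"
    return round(ratio, 2), flag

print(ratio_and_flag(2.4, 4.0))    # (1.2, '*')  -> a good outcome
print(ratio_and_flag(-2.4, 4.0))   # (-1.2, '#') -> a bad outcome
```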

IX. Best and worst ratios from the Fidelity Assessment are listed here. Components with ratios higher than 80% are listed in the Best spaces; those lower than 33% are written out on the Worst lines. Also enter satisfactory and unsatisfactory program components: components with the top three satisfaction ratings are entered on the appropriate lines, and those with the lowest ratings are entered on the remaining three lines.

X. Putting it all together as a To Do list. Given the outcome data, check whether components of the anti-stigma program should be replaced; this is especially important when negative significant differences are found for two of the three analyses by measure. List all items from the Fidelity Checklist with the worst ratios, and also list items with especially high ratios; sort those items into Modify and/or Training tasks. List the three items with the worst scores from the Satisfaction measure and those items rated most highly; sort these items into Modify and/or Training tasks. The CBPR team considers this list of program components in order to strengthen the anti-stigma program.


Fidelity Checklists
Use the Fidelity Checklist for either an education or contact program. Add any components specific to the anti-stigma program. Unobtrusively view the program facilitator during the actual presentation. Check those components that the facilitator shows during presentation. Determine the ratios for all sets of generic and specific-to-program components included in the program fidelity assessment. Circle ratios that are higher than 80%, and highlight ratios that are lower than 33%. Enter the findings in Section IX.

Satisfaction with Program


Select the most important components from the Fidelity Checklist to make up the Satisfaction with Program form. Do not include more than 10 items. Instruct research participants to complete the form after the anti-stigma program is finished. On the Satisfaction tally, tick for each research participant who rated an item positively (item > 5) or negatively (item < 3). Items with the most and least ticks are entered into the appropriate space on Section IX.
