
SUMMARY REPORT ON

CURRICULUM DEVELOPMENT
DR. TEOTICIA TAGUIBAO

SUBMITTED BY: MARILOU T. CRUZ, CASTOR G. ACCAD, RENESA B. MAMURI,
DIVINE GRACE O. TOMAS, MARY ANN V. PITAS

PROGRAMME ASSESSMENT AND FEEDBACK STRATEGIES

• DEFINITION OF ASSESSMENT

• Assessment is the ongoing process of gathering, analysing and
reflecting on evidence to make informed and consistent judgements
to improve future student learning.

• PURPOSES OF AN ASSESSMENT

• Assessment has three main purposes:

• Assessment for learning occurs when teachers use inferences about
student progress to inform their teaching. (formative)

• Assessment as learning occurs when students reflect on and monitor
their progress to inform their future learning goals. (formative)

• Assessment of learning occurs when teachers use evidence of
student learning to make judgements on student achievement
against goals and standards. (summative)

• Assessment for and as learning occur while students are engaged
in the process of learning, while assessment of learning occurs
at the end of a learning process, task or unit of work, or for
reporting at the end of a time period such as a semester.

• PRINCIPLES OF ASSESSMENT

 Principle 1 - Assessment should be valid.

 Validity ensures that assessment tasks and associated criteria
effectively measure student attainment of the intended learning
outcomes at the appropriate level.

 Principle 2 - Assessment should be reliable and consistent.

 Reliable assessment requires clear and consistent processes for
the setting, marking, grading and moderation of assignments.

 Principle 3 - Information about assessment should be explicit,
accessible and transparent.

 Clear, accurate, consistent and timely information on assessment
tasks and procedures should be made available to students, staff
and other external assessors or examiners.

 Principle 4 - Assessment should be inclusive and equitable.

 As far as is possible without compromising academic standards,
inclusive and equitable assessment should ensure that tasks and
procedures do not disadvantage any group or individual.

 Principle 5 - Assessment should be an integral part of programme
design and should relate directly to the programme aims and
learning outcomes.

 Assessment tasks should primarily reflect the nature of the
discipline or subject but should also ensure that students have
the opportunity to develop a range of generic skills and
capabilities.

 Principle 6 - The amount of assessed work should be manageable.

 The scheduling of assignments and the amount of assessed work
required should provide a reliable and valid profile of
achievement without overloading staff or students.

 Principle 7 - Formative and summative assessment should be
included in each programme.

 Formative and summative assessment should be incorporated into
programmes to ensure that the purposes of assessment are
adequately addressed. Many programmes may also wish to include
diagnostic assessment.

 Principle 8 - Timely feedback that promotes learning and
facilitates improvement should be an integral part of the
assessment process.

 Students are entitled to feedback on submitted formative
assessment tasks, and on summative tasks where appropriate. The
nature, extent and timing of feedback for each assessment task
should be made clear to students in advance.

 Principle 9 - Staff development policy and strategy should
include assessment.

 All those involved in the assessment of students must be
competent to undertake their roles and responsibilities.

• VALIDITY VS. RELIABILITY OF PROGRAMME ASSESSMENT

• VALIDITY

• Does the test cover what we are told (or believe) it covers?
To what extent?

• Is the assessment being used for an appropriate purpose?

• VALIDITY DEFINED

• It measures what it purports to measure.

• It involves the interpretation of a score for a particular
purpose or use (because a score may be valid for one use but not
another).

• It is a matter of degree, not all-or-none.

• CATEGORIES OF VALIDITY EVIDENCE

1. Face Validity - refers to the degree to which a test appears to
measure what it purports to measure.

2. Content Validity - there is a good match between the content of
the test and some well-defined domain of knowledge or behavior.

3. Criterion-Related Validity - demonstrates the degree of accuracy
of a test by comparing it with another “test, measure or procedure
which has been demonstrated to be valid” (i.e. a valued criterion).

4. Construct Validity - is based on the accumulation of knowledge
about the test and its relationship to other tests and behaviors.

5. Consequential Validity - in the real world, the consequences that
follow from the use of assessments are important indications of
validity.

• RELIABILITY

• is the degree to which a test consistently measures whatever it
measures.

 Test-retest Reliability:

 Test-retest reliability is the degree to which scores are
consistent over time. It indicates score variation that occurs
from testing session to testing session as a result of errors of
measurement. Problems: memory, maturation, learning.
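As a rough illustration of how a test-retest coefficient is obtained, the sketch below correlates two administrations of the same test using the Pearson product-moment formula. The scores are invented sample data, not drawn from any real study.

```python
# Illustrative sketch (not from the report): test-retest reliability
# estimated as the Pearson correlation between two administrations
# of the same test. All scores below are invented sample data.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# The same students tested twice, a few weeks apart (hypothetical scores).
first_session = [78, 85, 62, 90, 71, 84, 69, 95]
second_session = [80, 83, 65, 92, 70, 86, 72, 93]

reliability = pearson_r(first_session, second_session)
print(f"test-retest reliability: {reliability:.2f}")
```

A coefficient near 1.0 indicates stable scores across sessions; memory, maturation and learning effects between the two sessions tend to distort it, as noted above.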

 Equivalent-Forms or Alternate-Forms Reliability:

 Two tests that are identical in every way except for the actual
items included. Used when it is likely that test takers will
recall responses made during the first session and when alternate
forms are available. The two sets of scores are correlated; the
obtained coefficient is called the coefficient of equivalence (or
the coefficient of stability and equivalence when the forms are
administered at different times). Problem: the difficulty of
constructing two forms that are essentially equivalent.

 Both of the above require two administrations.

 Split-Half Reliability:

 Requires only one administration. Especially appropriate when
the test is very long. The most commonly used method to split
the test into two is the odd-even strategy. Since longer tests
tend to be more reliable, and since split-half reliability
represents the reliability of a test only half as long as the
actual test, a correction formula must be applied to the
coefficient: the Spearman-Brown prophecy formula.

 Split-half reliability is a form of internal consistency
reliability.
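The odd-even split and the Spearman-Brown correction described above can be sketched as follows. The item scores are invented, and the helper names are illustrative only.

```python
# Illustrative sketch (not from the report): split-half reliability via
# an odd-even split, corrected for the halved test length with the
# Spearman-Brown prophecy formula. Item scores below are invented.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

def split_half_reliability(item_scores):
    """item_scores: one row per student, one column per item (in order)."""
    odd_totals = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5, ...
    even_totals = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6, ...
    r_half = pearson_r(odd_totals, even_totals)
    # Spearman-Brown prophecy formula: predicted reliability of the
    # full-length test from the half-test correlation.
    return 2 * r_half / (1 + r_half)

# Hypothetical 0/1 item scores for 6 students on an 8-item test.
scores = [
    [1, 1, 1, 1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1, 0, 1, 0],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 0],
]

r_full = split_half_reliability(scores)
print(f"corrected split-half reliability: {r_full:.2f}")
```

Without the correction the coefficient would describe a test only half as long; the Spearman-Brown step raises it to the estimate for the full-length test.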

 Rational Equivalence Reliability:

 Rational equivalence reliability is not established through
correlation but rather estimates internal consistency by
determining how all items on a test relate to all other items and
to the total test.

 Internal Consistency Reliability:

 Determining how all items on the test relate to all other items.
The Kuder-Richardson formula yields an estimate of reliability that
is essentially equivalent to the average of the split-half
reliabilities computed for all possible halves.
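A minimal sketch of the Kuder-Richardson formula 20 (KR-20), the most common rational-equivalence estimate for dichotomously scored (0/1) items; the item scores below are invented sample data.

```python
# Illustrative sketch (not from the report): Kuder-Richardson formula 20
# (KR-20), an internal-consistency estimate for tests scored 0/1 per item.
# KR-20 = (k / (k - 1)) * (1 - sum(p*q) / variance of total scores),
# where p is the proportion answering an item correctly and q = 1 - p.

def kr20(item_scores):
    """item_scores: one row per student, one 0/1 column per item."""
    n_students = len(item_scores)
    k = len(item_scores[0])  # number of items
    # Sum of item variances p*q across all items.
    pq_sum = 0.0
    for i in range(k):
        p = sum(row[i] for row in item_scores) / n_students
        pq_sum += p * (1 - p)
    # Variance of the students' total scores.
    totals = [sum(row) for row in item_scores]
    mean_total = sum(totals) / n_students
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_students
    return (k / (k - 1)) * (1 - pq_sum / var_total)

# Hypothetical 0/1 item scores for 6 students on a 6-item test.
scores = [
    [1, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 0],
]

alpha = kr20(scores)
print(f"KR-20 reliability: {alpha:.2f}")
```

Because KR-20 uses only one administration and every possible item pairing, it avoids the arbitrariness of choosing a single split.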
