Section 3: Course Assessment


Symbol Key

 

  • Computer/Web Learning: One or more of the references addresses an online, computer-based, or web-enhanced teaching environment.
  • Many Applications: Best practice applies to a wide range of subject areas and teaching methods.
  • Collegis Recommendation: Best practice has been used or observed by Collegis staff and comes highly recommended.

 

Match learning objectives with assessments.

Morgan and O'Reilly's (1999) Assessing Open and Distance Learners contains some useful tips for applying good assessment practices to teaching, especially in the online environment. See Chapter 5: “Designing Assessment Tasks” (pp. 46-62). A sampling of the points made in this chapter:

  • Types of assessments should be appropriate for the course materials and desired course outcomes.
  • Use quantitative assessment when objective, numeric measurement is needed.
  • Use qualitative assessment when qualitative thinking and writing are needed.
  • For group projects, use both group and individual measures.

McKeachie (1999) also provides some practical strategies for assessing learning. See, in particular, Chapter 7, “Testing and Assessing Learning: Assigning Grades is Not The Most Important Function” (pp. 85-110). This chapter contains sections on balancing specified objectives with various types of test items, test construction (including the strengths and weaknesses of various question types), helping students learn to take tests, grading, helping students and yourself learn from the test, and other methods of assessing learning. It also references several studies in very specific areas of testing.

In Chapter 10 of Morgan and O'Reilly's text (pp. 93-97), the authors discuss developing assessment policies. The chapter encourages making students aware of student appeal procedures, assignment extensions and conditions for special consideration, and grading practices (such as turnaround time). It also addresses plagiarism and the importance of citing sources.

Take precautions to limit the possibility of cheating.

See Chapter 8 in McKeachie (1999), “What to do about cheating” (pp. 111-116). McKeachie suggests several ways to minimize student cheating:

  • Relieve pressure by providing multiple opportunities for students to demonstrate mastery of course goals.
  • Create a meaningful assessment that is not overly long.
  • Administer assessments in small group sections where students are known and their sense of accountability may be greater.
  • Create randomized tests.

According to the Commission on Institutions of Higher Education (CIHE) (2000), high-stakes assessments such as exams should be administered in circumstances that include firm learner identification. If proctoring is used, there should be an established, effective procedure for selecting proctors. The document is available online.

Frye (2000) wrote an article on preventing online learners from cheating. Although the article focuses on corporate training, the issues it raises about online testing are the same as those faced by many instructors. The article suggests the following solutions to cheating problems: online proctored exams, performance testing, daily computer-graded online quizzes, and spot checks. It also discusses biometric authentication, including thumbprint, voiceprint, and facial ID. Although these technologies are becoming more prevalent, the article concludes that building students' motivation not to cheat into the design of an online course can be the best means of deterrence.


Communicate assessment tasks clearly.

In Chapter 6 of Morgan and O'Reilly's (1999) text, the authors discuss communicating assessment tasks. When creating an assessment:

  • Provide a rationale and explain any unfamiliar terms.
  • Offer suggestions for methods of approach.
  • Explain any conventions that govern the form of the student response.
  • Use clear, concise language.
  • Provide clearly defined marking criteria.
  • Be available to answer questions and provide guidance when needed.

Use formative assessment to promote deeper learning; consider alternative forms of assessment such as portfolios.

Askham (1997) conducted a study utilizing ongoing portfolio assessment as a vehicle for formative feedback. The study also incorporated learning journals as a means for reflective learning. Regarding the value of ongoing feedback, the author noted, "Feedback generated progressively makes it much easier to respond to individual and collective problems and students are better placed to identify these" (p. 312). On the use of portfolio evaluation as the vehicle for formative feedback, the author wrote, "Although marking portfolios can be time consuming and may create additional problems of objectivity, these issues can be addressed by the careful structuring of material to be submitted" (p. 312).

Use self-assessments to improve learning and self-awareness.

Sluijsmans, Dochy, and Moerkerke (1998) conducted a meta-review of 62 studies that explored the effectiveness of self-, peer-, and co-assessment. Here is what they concluded about self-assessment:

Self-assessment is primarily used as formative assessment, allowing learners to reflect on their own progress. Generally, weaker students tended to overrate themselves, and good students tended to underrate themselves. Additionally, students who engaged in overt self-assessment during the learning process had a higher percentage of correct responses on exams than those who did not undergo self-assessment. The authors concluded, "Overall, it can be concluded that research reports positive findings concerning the use of self-assessment in educational practice. Students in higher education are well able to self-assess accurately and this ability improves with feedback and development over time. Moreover, students who engage in self-assessment tend to score higher on tests" (p. 300).

See also Chapter 4 in Morgan and O'Reilly (1999): “Online technologies in open and distance assessment” (pp. 33-42) and pp. 170-172 in the case studies section. Traditionally, self- and peer-assessments have not been used widely in distance education, but online technologies have made these types of assessment more feasible. Self-assessment is a valuable life skill and can be encouraged as part of lifelong learning.

Have students conduct peer-assessments (may be particularly effective when used in conjunction with group work).

As mentioned in the previous item, Sluijsmans, Dochy, and Moerkerke (1998) conducted a meta-review of 62 studies that explored the effectiveness of self-, peer-, and co-assessment. Here is what they concluded about peer-assessment:

Defining peer-assessment as groups of individuals giving each other feedback, the authors wrote, "Peer assessment is not only a grading procedure, but also part of a learning process in which skills are developed. Peer assessment can be seen as a part of the self-assessment process and serves to inform self-assessment" (p. 300).

They described three forms of peer-assessment:

  1. Peer Ranking - each member ranks the others' contributions from best to worst
  2. Peer Nomination - members are rated on particular characteristics
  3. Peer Rating - similar to above, but ratings are completed along multiple characteristics

In Chapter 4 of Morgan and O'Reilly (1999), the authors argue that peer assessment is a valuable skill in itself as well as a contributor to motivation, group effort, and community building. With new online technologies for tracking assessments, peer assessment may become a more feasible part of class activities.


When conducting performance assessments, take into account the role of feedback in short-term versus long-term retention.

Schroth (1997) conducted a study on the effects of frequency of feedback on transfer in concept identification. Findings reveal that although lowering the percentage of feedback trials slowed concept attainment, it facilitated performance on all transfer tasks; in general, the fewer feedback trials subjects received, the greater the amount of transfer. These results are in line with previous research demonstrating that "conditions that make it more difficult for subjects in the acquisition phase of a learning task have positive benefits for transfer." The hypothesis is that greater feedback at the time of knowledge acquisition narrows students' focus and may encourage them to rely on memory rather than higher levels of thinking, which improves performance at the time of acquisition. Less feedback during acquisition may instead encourage a focus on concepts and generalities, enabling students to better apply their learning in transfer situations.

Evaluate your assessment practices.

Morgan and O’Reilly’s Chapter 11: “Evaluating Your Assessment Practices” (pp. 98-102) lists the following areas to examine:

  • Appropriateness of assessment items within a subject (examples: alignment with objectives, validity, authenticity, frequency, size, diversity)
  • Student response (student perception, motivation, meaningful demonstration of learning)
  • Nature and quality of feedback provided and its contribution to learning
  • Nature and quality of support (tutoring; answering questions)

The chapter also points out that evaluation is increasingly seen as a matter of satisfying students' goals, which are themselves increasingly diverse, particularly given the diversity of the online learning population.

Conduct a mid-semester and/or end-of-semester course evaluation to collect feedback on the workings of the course.

Marsh (1984) provides an overview of findings and research designs used to study students’ evaluations of teaching effectiveness. Marsh concludes that class-average ratings are:

  • Multidimensional
  • Reliable and stable
  • Primarily a function of the instructor who teaches the course rather than the course being taught
  • Relatively valid against a variety of indicators of effective teaching
  • Seen to be useful by faculty as feedback about their teaching
  • Seen to be useful by students for course selection
  • Seen to be useful by administrators in personnel decisions

Marsh also examines the implications of these findings and provides possible directions for future research in this area.

Select appropriate items for your mid-semester or end-of-semester course evaluation.

What kind of items should be included on a course evaluation? Shatz and Best (1986) conducted a study to determine which items on a course evaluation were considered most important by faculty and students. They found that, when asked to rate the importance of items from a large pool, students and faculty were in substantial agreement. A list of items with their rankings is presented, along with implications for interpreting evaluations.

If you administer a course evaluation, consider answering the questions yourself and comparing your responses with those of your students.

You may have wondered whether students’ ratings of the course and your teaching are accurate. Drews, Burroughs, and Nokovich (1987) found that, indeed, faculty self-ratings are significantly correlated with student ratings. Specifically, they found consistency in the areas of material covered, instructor performance, and overall impressions of the success of the class.