A Brief Overview of the Development of the CSLO Rubrics

Presented to the President’s Cabinet—Jan. 21, 2008

2003

  • CSLOs were drafted as part of a Title III summer initiative and presented at MOM.

2003-2004

  • Each school sent representatives to a SLOA sub-group which created statements to define and clarify the CSLOs.
  • These statements were forwarded to the Quality Student Learning Council and became part of every course syllabus at the college.

2005-2007

  • The assessment committee considered various ways to measure student learning at the institutional level. The committee decided to pursue the Artifact Model and to devise our own rubrics as part of the process.
     
  • During fall 2005, a sub-committee drafted the first rubric, for THINK, by reviewing and drawing on rubrics from other institutions and sources.
    • We determined a number of characteristics for each CSLO and devised a range of scoring levels.
    • This draft rubric was presented at the Assessment Retreat and at a CTX conversation on assessment; the rubric was also shown at the HLC Assessment Seminar in Lisle, IL and received support from mentors and participants.
    • Early piloting of the draft proved favorable.
       
  • During fall 2006, sub-groups of the assessment committee drafted rubrics for the other four CSLOs: ACT, COMMUNICATE, LEARN, and INTEGRATE.
    • In December 2006, the assessment committee met to pilot all five rubrics with a selection of artifacts submitted from a range of disciplines at the college.
    • In May 2007, the committee met again to pilot the rubrics with more sample artifacts.
       
  • During fall 2007, a sub-group of the assessment task force reviewed and revised the rubrics based on our experiences and observations from the December and May sessions.
    • Numerical scores were added to facilitate collection and reporting of data.
    • Committee members met with some departments to share the rubrics.
    • Another pilot occurred in December 2007.

Some Findings

  1. The rubrics have, in general, worked well in assessing student artifacts.
  2. Group consensus has been strong in scoring the artifacts.
  3. Instructors have learned many ways to take feedback from the artifact readings and apply it to improving their teaching and their students' learning.
  4. We are finding ways to identify which assignments work better as artifacts.