Using Evaluation Rigor to Improve Student Academic Writing and Faculty Scoring of Assignments

Monday, 18 November 2013

Dolores Zygmont, RN, PhD
Department of Nursing, College of Health Professions and Social Work, Temple University, Philadelphia, PA

Learning Objective 1: Describe the process for developing a scoring rubric for written assignments

Learning Objective 2: Discuss an inter-rater reliability process to increase the integrity of scores for written assignments

It is well documented in the literature that the quality of written assignments submitted by students has declined over the past several decades. In addition, when students are graded on the actual quality of their submissions, faculty are accused of 'being too tough', 'being unfair', or 'picking on' the student. A review of writing assignments, how they were scored, and how students interpreted the scores was conducted. What was evident from the students was a lack of certainty regarding: (1) what was required in the different sections of the assignment criteria, (2) how the grades were assigned, and (3) whether grades were assigned fairly. A review of the assignment criteria found that broad topic areas were used with little direction and that scoring rubrics included individual sections worth 40% of the assignment. Students were unable to complete the assignments satisfactorily because they were unsure what counted as satisfactory, and faculty had few specific criteria to guide grading, so there was little certainty of real equity in grade assignments.

A group of faculty chose to focus on written assignments: how best to help students learn from each assignment, how to improve their academic writing skills, and how to ensure the integrity of the scores on the written assignments. The group rewrote the assignment criteria, selected a single model for all scoring rubrics, and developed a scoring rubric for each major assignment in the course. Finally, to ensure consistency in scoring and clarity of the rubrics, the faculty group assessed inter-rater reliability for each major paper in the courses, with the goal of obtaining a reliability coefficient of 0.8. This presentation will share the evolution of the criteria, rubrics, and IRR scores over the project period.
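The abstract does not specify which reliability statistic was used, so the following is a minimal sketch of one common choice for categorical rubric scores, Cohen's kappa for two raters, written in Python; the rater score lists and the function name are hypothetical, for illustration only.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same set of papers.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by chance.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of papers given identical scores.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal frequencies,
    # summed over all rubric score categories.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric scores (e.g., 1-4 per paper) from two faculty raters.
rater_a = [4, 3, 3, 2, 4, 3, 1, 2, 3, 4]
rater_b = [4, 3, 2, 2, 4, 3, 1, 2, 3, 3]

kappa = cohen_kappa(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.8 or above would meet the stated goal
```

If the rubric scores were instead treated as continuous totals rather than categories, an intraclass correlation coefficient would be a more typical statistic; the 0.8 target would apply in the same way.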