(120a) Problem Type, Location, and Difficulty of Auto-Graded Homework for a Material and Energy Balances Course

Authors 

Liberatore, M. - Presenter, University of Toledo
Yanosko, S., University of Toledo
Valentine, G., University of Toledo
Online homework and interactive textbooks document students’ work and generate big data for both students and faculty to interpret. Here, students’ usage of and success on over 700 auto-graded questions within an interactive textbook titled the Material and Energy Balances zyBook will be explored. Real-time auto-grading lets students know when they have mastered a topic, while teaching assistants and faculty can view both individual and group progress reports without laborious grading, which is especially valuable for larger classes. Our previous research, which examined reading participation and auto-graded problems at the course level, will be reviewed in this presentation. Specifically, median reading participation exceeded 93% across seven cohorts, and median percent correct on auto-graded problems was 91% or higher for six cohorts.

Since auto-graded problems allow unlimited attempts, students can receive feedback and persist, which captures some of the key tenets of deliberate practice. Two recent cohorts’ responses to hundreds of auto-graded questions will be examined with respect to problem type, location, and difficulty. Formative, single-calculation problems with scaffolding appear in most sections, while more summative, multi-concept problems appear at the end of each chapter (following the standard convention in chemical engineering textbooks). Problems requiring numerical answers to fall within a tolerance will be compared with multiple choice questions. New findings will show that median percent correct was high (above 80%) for all problem types. Attempts before correct provides another valuable metric for distinguishing between problem types, with numeric problems taking more attempts than multiple choice questions. Finally, a metric combining both correctness and attempts, called the deliberate practice score, provides a quantitative aggregate measure. This contribution builds upon a recent ASEE conference paper on this topic.