(282d) Developing Quantifiable Assessments in Advance of An ABET Visit: University of Connecticut

Authors 

Burkey, D. D. - Presenter, University of Connecticut
Mustain, W. E., University of Connecticut

In advance of UConn’s planned ABET visit in the fall of 2013, the university held a mock review in the spring of 2012. Data collection was carried out as it had been previously, and a draft report was compiled so that the mock evaluation team could provide feedback in advance of the actual report. Although the chemical engineering program received a six-year accreditation during the 2006-2007 academic year, the mock evaluators pointed out that most of the assessment being carried out was subjective in nature, with an over-reliance on survey data and self-reporting of performance by students and faculty.

In response to this criticism, the departmental ABET committee examined best practices presented in the AIChE Education Division and the ASEE Chemical Engineering Division, and developed a new, quantitative system for performing assessment along with a formal method for continuous improvement at the course and program levels. At the program level, the ABET a-k criteria are mapped to various courses so that all of the a-k outcomes are covered by the curriculum; this mapping is reviewed yearly at the faculty retreat. At the course level, faculty are now mentored on how to write student outcomes for their classes that can be quantitatively assessed, by mapping the outcomes to specific problems and assignments that demonstrate competence in those areas. Additionally, the outcomes are mapped to the ABET a-k criteria identified for that class at the retreat. Thus, we are in the process of shifting to an outcomes-based curriculum, in which all assignments can be mapped to a desired outcome for the course, and student attainment of the outcomes can be ascertained through the grading of these assignments. Faculty were also mentored in rubric generation, so that assignments can be evaluated fairly and uniformly even if the instructor changes over time. Graduate teaching assistants were employed to help document and implement the system, so as to reduce the faculty burden of data collection once the initial selection of problems and mapping were completed.
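As an illustration of the kind of aggregation this mapping enables, the sketch below shows how graded problems tagged with ABET a-k criteria can be rolled up into per-criterion attainment percentages, with low-scoring criteria flagged for attention. This is a minimal sketch, not the committee's actual tooling: the problem names, point totals, and 70% threshold are hypothetical placeholders.

```python
# A minimal sketch (hypothetical data, not the actual UConn tooling) of how
# graded problems mapped to ABET a-k criteria can yield quantitative
# attainment scores for a course.
from collections import defaultdict

# Each graded problem is mapped to one or more a-k criteria; score/max are
# section-aggregate points earned and points possible from normal grading.
problem_map = {
    "HW3-P2":   {"criteria": ["a", "e"], "score": 174.0, "max": 200.0},
    "Exam1-Q4": {"criteria": ["e"],      "score": 61.0,  "max": 80.0},
    "Lab2":     {"criteria": ["b", "k"], "score": 88.0,  "max": 100.0},
}

# Accumulate earned and possible points per criterion.
earned = defaultdict(float)
possible = defaultdict(float)
for problem in problem_map.values():
    for criterion in problem["criteria"]:
        earned[criterion] += problem["score"]
        possible[criterion] += problem["max"]

# Report attainment as a percentage, flagging criteria below a
# hypothetical 70% continuous-improvement threshold.
THRESHOLD = 0.70
for criterion in sorted(possible):
    attainment = earned[criterion] / possible[criterion]
    flag = "  <-- review in future offerings" if attainment < THRESHOLD else ""
    print(f"Outcome ({criterion}): {attainment:.1%}{flag}")
```

Because the mapping is fixed once at the retreat, rerunning a report like this each term requires only the routine grade data, which is what allows teaching assistants rather than faculty to handle the ongoing data collection.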

Thus, after the initial investment in developing assessable student outcomes and mapping them to problems and a-k criteria, a quantitative assessment of student performance was obtained. This not only addressed the previous criticism of a largely subjective assessment of student learning, but also proved far more useful in identifying specific problem areas within individual courses that faculty could address in future offerings, directly supporting continuous improvement. Under discussion for future implementation are focus groups in the core areas (transport, thermodynamics, and reaction engineering) to evaluate the courses in those areas, find synergies, identify and deploy best practices, mentor new faculty, and provide a further mechanism for continuous improvement.