By Greg Griffin, Ph.D., AICP
Dec. 10, 2021
Scholars and journalists have long argued that student evaluations can be unfair and unrelated to student learning. I suggest we stop the hand-wringing and apply some tricks from our friends in program administration and marketing research. This post shows one method of applying importance-performance analysis and reflects on how I am using this approach to improve student learning.
This fall I started the semester's History and Theory of Urban and Regional Planning with an interactive syllabus: an online survey that breaks the syllabus into its components. The notion is that active engagement with the content improves understanding of the critical course components. After working with importance-performance analysis in my public engagement practice and scholarship, I thought it might be helpful to evaluate the stated course outcomes from the syllabus.
Here's how I integrated course evaluation in the front and back ends of the course:
As the first assignment, the interactive syllabus asked students to rank the four course outcomes by "how important they are to you." This importance portion of the analysis comes before students engage with the course materials.
The last assignment was also delivered as an online survey and asked students to rank outcomes "according to how well this class performed in supporting your accomplishments" as a measure of performance. Since students had completed the course when I delivered this survey, they could evaluate the performance of the course vis-a-vis the stated outcomes.
I then graphed the averages of each outcome for the importance and performance axes (image below). Because each student ranks the same four outcomes relative to each other, each rank from 1 to 4 is used exactly once per response, so the grand mean of both axes is exactly (1 + 2 + 3 + 4) / 4 = 2.5. I marked that midpoint on both axes to create the importance-performance quadrants shown below. Simple!
Importance-performance graph of student responses to ranking four course objectives (N=19). Data is available for download from the interactive version.
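For readers who want to reproduce this kind of chart from their own survey export, here is a minimal sketch in Python using pandas and matplotlib. The file name, the column naming convention (importance_/performance_ prefixes), and the placeholder outcome keys are assumptions for illustration; the sketch also assumes ranks are coded 1 to 4 with 4 meaning most important or best performed.

```python
# Minimal sketch of an importance-performance chart from ranking data.
# Assumes a survey export with one row per student and hypothetical
# columns "importance_<outcome>" and "performance_<outcome>", ranks 1-4,
# with 4 = most important / best performed.
import pandas as pd
import matplotlib.pyplot as plt

outcomes = ["outcome_1", "outcome_2", "outcome_3", "outcome_4"]  # placeholder keys for the four syllabus outcomes
responses = pd.read_csv("syllabus_survey.csv")                   # hypothetical export file

# Mean rank of each outcome on each axis
importance = [responses[f"importance_{o}"].mean() for o in outcomes]
performance = [responses[f"performance_{o}"].mean() for o in outcomes]

fig, ax = plt.subplots()
ax.scatter(performance, importance)
for label, x, y in zip(outcomes, performance, importance):
    ax.annotate(label, (x, y), textcoords="offset points", xytext=(5, 5))

# Because each student uses each rank exactly once, the grand mean across
# the four outcomes is (1+2+3+4)/4 = 2.5, which sets the quadrant crosshairs.
ax.axhline(2.5, color="gray", linestyle="--")
ax.axvline(2.5, color="gray", linestyle="--")
ax.set(xlim=(1, 4), ylim=(1, 4),
       xlabel="Performance (mean rank)",
       ylabel="Importance (mean rank)",
       title="Importance-performance analysis of course outcomes")
plt.show()
```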
The upper-left quadrant would contain objectives that students rated important but that the course did not deliver well. None of the outcome evaluations fell in this box, but anything that did would be the top priority for improving the course next time. The upper-right quadrant includes the outcomes on describing theories and explaining historical antecedents, which this evaluation indicates I should continue. The lower-left box shows course outcomes that need review: students ranked them lower in importance, and the course did not deliver them optimally. The lower-right corner shows areas that might be receiving too much attention, which is empty in this course example.
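If it helps to read the chart programmatically, the same means can be sorted into the four boxes with a small helper. This continues the sketch above (reusing the outcomes, importance, and performance lists) and uses the conventional importance-performance quadrant labels, which map onto the descriptions in the previous paragraph.

```python
# Continues the sketch above; labels follow the conventional
# importance-performance quadrant names.
def quadrant(importance_mean, performance_mean, midpoint=2.5):
    if importance_mean >= midpoint and performance_mean < midpoint:
        return "Concentrate here: important but under-delivered"
    if importance_mean >= midpoint and performance_mean >= midpoint:
        return "Keep up the good work"
    if importance_mean < midpoint and performance_mean < midpoint:
        return "Low priority: needs review"
    return "Possible overkill: more attention than needed"

for label, perf, imp in zip(outcomes, performance, importance):
    print(f"{label}: {quadrant(imp, perf)}")
```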
How should I deal with the "low priority" outcomes to improve the course? I need to look beyond this screening tool to the reflective essay I included as part of the final assignment and the written portions of the university-delivered student evaluations. This approach helps me go beyond the rate-my-professor simplicity to understand why some course outcomes need attention.
Now I can stop hand-wringing about where to focus course improvements and keep refining the course each year. How do you keep tabs on course changes and improve learning outcomes?
Greg Griffin is an associate professor and program leader of urban and regional planning at The University of Texas at San Antonio. His research is published in the Journal of the American Planning Association and the Journal of Big Data Analytics in Transportation. He recently served with the Harvard Law School Cyberlaw Clinic on e-scooter privacy for an amicus brief in the Ninth Circuit. greg.griffin@utsa.edu