This week, we asked guest contributor Eric Landrum about using grade information for faculty assessment purposes. Below, he gives examples of how he uses detailed rubrics as a form of assessment for faculty improvement.
First, some definitions, of course, and my definitions may vary a bit from others'. Grading is a process by which feedback is provided to students to help improve their performance; assessment is a process by which feedback is provided to faculty to help improve their performance. Are grading and assessment synonymous? Not usually, although with careful planning, the same process can serve both grading and assessment purposes. Huh?
Please allow me to use my own example. I teach an undergraduate, 300-level Research Methods course. Each student conducts a survey study and ends the course with a completed APA manuscript. Early in the semester, students write the first draft of their Introduction section, an assignment worth 100 points. I grade that assignment, meaning that I provide feedback to each student to help them improve their future performance. If the class average is 86, that is grade data, which is not very helpful to me as an instructor. That is, the average grade does not tell me much about how I can improve the next time I teach the course, how I might tweak the assignment instructions to make things clearer, how I might design a better in-class activity leading up to the Introduction section due date, and so on. Grade data will typically not help me improve as an instructor.
However, if I alter my practice a bit, and I use rubrics to evaluate subgoals of the Introduction section assignment, I can get to what I call assessment data. So that is what I did in my Research Methods course. My 100-point assignment was subdivided into three rubric areas, like this:
Students still need to receive grades, and they do, but now the rubric outcomes are my assessment data. Using a learning management system, I can run a “rubric statistics report,” which looks like this:
As you can see (from my real data), this particular semester my students were struggling with “clarity of presentation, including mechanics and grammar.” These rubric outcomes become my assessment data: I can go back into that class and work on this area for upcoming assignments, and I can certainly make changes before I teach the course again the following semester. Which I did. With a new exercise about mechanics and grammar, the same assignment, and the same rubrics, student performance improved substantially. That’s good. Now I move on to whatever else I can improve about my course and my performance. That’s continuous improvement.
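For instructors whose learning management system can export raw rubric scores but not a statistics report, the per-criterion summary described above is easy to compute by hand. Here is a minimal sketch; the criterion names (apart from the "clarity" criterion quoted above) and the point values are invented for illustration, not the actual rubric:

```python
# Hypothetical rubric export: each student's scores on the three
# criteria of the 100-point Introduction-section assignment.
# (Criterion names and scores are made up for this example.)
rubric_scores = {
    "Content and organization": [38, 42, 35, 40],
    "Use of sources and APA style": [27, 25, 30, 28],
    "Clarity of presentation, including mechanics and grammar": [18, 15, 20, 16],
}

def criterion_averages(scores):
    """Average score per rubric criterion -- the 'assessment data',
    as opposed to a single overall grade average."""
    return {name: sum(vals) / len(vals) for name, vals in scores.items()}

for criterion, avg in criterion_averages(rubric_scores).items():
    print(f"{criterion}: {avg:.2f}")
```

A low average on one criterion relative to its point allocation (here, "clarity of presentation") flags where to intervene, which is exactly the signal a single overall grade average hides.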
But in order to gain the benefit, you have to “close the loop.” I like to think of this as a process borrowed from action research; the steps are depicted below:
To truly gain the benefits of assessment, you must want to continually improve, make changes, collect data, reflect, see what the improvement areas are, and keep repeating the process. This is just as important for a novice professor as it is for someone 28 years post-PhD like me.
We expect professionals in our field to stay up to date and maintain their expertise as part of adhering to the APA Ethics Code, and we should expect teachers of psychology to do the same. This has also been expressed as part of the scientist-educator model (Bernstein et al., 2010) as well as in the recent Chew et al. (2018) manifesto, which was reviewed by TNP. Lastly, what about outcomes assessment, that process that departments must complete on a somewhat regular basis to keep their institutional accreditation? Imagine if you were to use rubrics as I have described to collect assessment data, used the action research model to continuously improve your teaching, and did a little write-up of that process in each annual evaluation. Then imagine that each faculty member in the department did the same thing, and the department chair collected and collated all of those write-ups into one document. As someone who has served as an external program reviewer, my belief is that accreditors would be thrilled to learn that your department was using student data meaningfully for the continuous improvement of faculty performance.
Guest post written by Eric Landrum, PhD.
Eric is a professor of psychology at Boise State University. His research focuses on identifying educational conditions that best promote student success and utilizing SoTL research to advance fellow scientist-educators. Apart from his extensive publications (including this book and this book on career options for psych majors) and presentations, he has also served on numerous committees in psychology, including (but not limited to) serving as President of Psi Chi (2017-2018), President of the Society for the Teaching of Psychology (2014), and President of the Rocky Mountain Psychological Association (2016). Eric also hosts the Psych Sessions podcast with Garth Neufeld, where they discuss teaching (n' stuff) with professors from across the country.
Bernstein, D. J., Addison, W., Altman, C., Hollister, D., Komarraju, M., Prieto, L., Rocheleau, C. A., & Shore, C. (2010). Toward a scientist-educator model of teaching psychology. In D. F. Halpern (Ed.), Undergraduate education in psychology: A blueprint for the future of the discipline (pp. 29-46). Washington, DC: American Psychological Association.
Chew, S. L., Halonen, J. S., McCarthy, M. A., Gurung, R. A. R., Beers, M. J., McEntarffer, R., & Landrum, R. E. (2018). Practice what we teach: Improving teaching and learning in psychology. Teaching of Psychology, 45, 239-245. doi:10.1177/0098628318779264