 The Novice Professor Blog


The Parsing of Grade and Assessment Processes for Faculty Member Continuous Improvement

7/19/2018


 
This week, we asked guest contributor Eric Landrum about using grade information for faculty assessment purposes. Below, he gives concrete examples of how he uses detailed rubrics as a form of assessment for faculty improvement.
First, I’m thrilled to be able to contribute this blog post to The Novice Professor.  The social media and online presence of TNP is so positive; I am pleased to be supportive of these efforts.
 
How can a faculty member leverage the benefits of assessment for their own continuous improvement, while at the same time satisfying both students’ needs for grading feedback and institutional needs for meaningful assessment data?  An answer follows in these paragraphs: it is possible, but it takes careful planning and forethought, and it will not happen easily or without effort.

R. Eric Landrum, PhD

Department of Psychological Science
Boise State University
1910 University Drive
Boise, ID  83725-1715
 
E: elandru@boisestate.edu
T: @ericlandrum
First, some definitions; of course, my definitions may vary a bit from others’.  Grading is a process by which feedback is provided to students to help improve their performance; assessment is a process by which feedback is provided to faculty to help improve their performance.  Are grading and assessment synonymous?  Not usually, although with careful planning, the same process can serve both grading and assessment purposes.  Huh?
 
Please allow me to use my own example.  I teach an undergraduate, 300-level Research Methods course.  Each student conducts a survey study and ends the course with a completed APA-style manuscript.  Early in the semester, each student writes the first draft of the Introduction section, an assignment worth 100 points.  I grade that assignment, meaning that I provide feedback to each student to help them improve their future performance.  If the class average is 86, that is grade data, and it is not very helpful to me as an instructor.  That is, the average grade does not tell me much about how I can improve the next time I teach the course, how I might tweak the assignment instructions to make things clearer, how I might design a better in-class activity leading up to the Introduction section due date, and so on.  Grade data will typically not help me improve as an instructor.
 
However, if I alter my practice a bit, and I use rubrics to evaluate subgoals of the Introduction section assignment, I can get to what I call assessment data.  So that is what I did in my Research Methods course.  My 100-point assignment was subdivided into three rubric areas, like this:
  • completion of assignment (25 points)
  • proper use of APA style and formatting (35 points)
  • clarity of presentation, including mechanics and grammar (40 points)
Students still need to receive grades, and they do, but now the rubric outcomes are my assessment data.  Using a learning management system, I can run a “rubric statistics report,” which looks like this:
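For readers whose LMS does not offer such a report, the same per-criterion summary is straightforward to compute by hand. Below is a minimal Python sketch (the criterion names mirror the rubric above, but the student scores are made up for illustration) that averages each criterion as a percentage of its points possible and flags the weakest area:

```python
# Per-criterion rubric summary: mean score as a percent of points possible.
# Criterion point values follow the rubric in the post; the student
# scores are hypothetical, not real class data.

CRITERIA = {
    "completion of assignment": 25,
    "APA style and formatting": 35,
    "clarity of presentation": 40,
}

# One dict of rubric scores per student.
scores = [
    {"completion of assignment": 25, "APA style and formatting": 30, "clarity of presentation": 28},
    {"completion of assignment": 23, "APA style and formatting": 33, "clarity of presentation": 25},
    {"completion of assignment": 25, "APA style and formatting": 28, "clarity of presentation": 30},
]

def rubric_report(criteria, all_scores):
    """Return {criterion: mean percent of points earned} across all students."""
    report = {}
    for name, possible in criteria.items():
        earned = [s[name] for s in all_scores]
        report[name] = round(100 * sum(earned) / (len(earned) * possible), 1)
    return report

report = rubric_report(CRITERIA, scores)
weakest = min(report, key=report.get)  # criterion with the lowest mean percent
```

With the illustrative scores above, `weakest` comes out to "clarity of presentation", the same pattern described in the next paragraph; normalizing each criterion to a percentage is what makes the 25-, 35-, and 40-point areas comparable.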
[Figure: rubric statistics report generated by the learning management system]
As you can see (from my real data), this particular semester my students were struggling with “clarity of presentation, including mechanics and grammar.”  I can now use these rubric outcomes as my assessment data, go back into that class and work on this area in upcoming assignments, and certainly make changes before I teach the course again the following semester.  Which I did.  With a new exercise about mechanics and grammar, the same assignment, and the same rubrics, student performance improved substantially.  That’s good.  Now I move on to asking what else I can improve about my course and my performance.  That’s continuous improvement.
 
But in order to gain the benefit, you have to “close the loop.”  I like to think of this as the action research process, with the steps depicted below:
[Figure: the action research cycle]
To truly gain the benefits of assessment, you must want to continually improve: make changes, collect data, reflect, see what the improvement areas are, and keep repeating the process.  This is just as important for a novice professor as it is for someone 28 years post-PhD like me.
 
We expect professionals in our field to keep up to date and maintain their expertise as part of adhering to the APA Ethics Code, and we should expect teachers of psychology to do the same.  This has also been expressed as part of the scientist-educator model (Bernstein et al., 2010) as well as in the recent Chew et al. (2018) manifesto, which was reviewed by TNP.  Lastly, what about outcomes assessment, the process that departments must complete on a somewhat regular basis to keep their institutional accreditation?  Imagine if you were to use rubrics as I have described to collect assessment data, use the action research model to continuously improve your teaching, and do a little write-up of that process in each annual evaluation.  Then imagine that each faculty member in the department did the same thing, and the department chair collected and collated all of those write-ups into one document.  As someone who has served as an external program reviewer, my belief is that accreditors would be thrilled to learn that your department was using student data meaningfully for the continuous improvement of faculty performance.
Guest post written by Eric Landrum, PhD.
Eric is a professor of psychology at Boise State University. His research focuses on identifying educational conditions that best promote student success and utilizing SoTL research to advance fellow scientist-educators. Apart from his extensive publications (including this book and this book on career options for psych majors) and presentations, he has also served on numerous professional committees, including (but not limited to) serving as President of Psi Chi (2017-2018), President of the Society for the Teaching of Psychology (2014), and President of the Rocky Mountain Psychological Association (2016). Eric also hosts the Psych Sessions podcast with Garth Neufeld, where they discuss teaching (n' stuff) with professors from across the country.
References
Bernstein, D. J., Addison, W., Altman, C., Hollister, D., Komarraju, M., Prieto, L., Rocheleau, C. A., & Shore, C. (2010). Toward a scientist-educator model of teaching psychology. In D. F. Halpern (Ed.), Undergraduate education in psychology: A blueprint for the future of the discipline (pp. 29-46). Washington, DC: American Psychological Association.
 
Chew, S. L., Halonen, J. S., McCarthy, M. A., Gurung, R. A. R., Beers, M. J., McEntarffer, R., & Landrum, R. E. (2018). Practice what we teach: Improving teaching and learning in psychology. Teaching of Psychology, 45, 239-245. doi:10.1177/0098628318779264