Last week I posted a list of resources and technology I learned about at ACT, so I thought I would follow it up this week with a list of activities and assignments that I learned about at the conference. In no particular order, here they are!
Last week I wrote a post discussing a SoTL agenda, which came from a talk at ACT. One of the points in that agenda was to use true experimental design in SoTL research studies. Immediately following the SoTL agenda talk (in the same room even) was a talk covering a SoTL research study where they used true experimental design! What are the chances!
Full disclosure: This post is a research roller coaster. Please don't get caught up in inspiring findings partway through and completely change your courses. Read through to the end; I promise to end on a positive note.
In an ambitious multi-institution collaboration, Bridgette Martin Hard, Shannon Brady, Melissa Beers, and Jessica Hill came together to extend a reappraisal intervention to the classroom. Their study was based on an experiment conducted by Jamieson, Mendes, Blackstock, and Schmader (2010).
One of my favorite talks at ACT was by Regan Gurung, Andrew Christopher, and Georjeanna Wilson-Doenges. The talk was called "Inquiring Minds Want to Know: A Research Agenda for Scholarship of Teaching and Learning." It was aptly named, as they did indeed provide us with a much-needed SoTL research agenda.
First off, ACT was amazing! It was my first time there, and I cannot stress enough how it exceeded my expectations in every way. While there, I learned about so many new technologies and resources! I compiled a list I thought others would like to know about. In no particular order, here they are!
Next week, Jen and I will be posting about all the things we learned at the Annual Conference on the Teaching of Psychology this year in Phoenix. Here is a quick recap of the highlights from today's sessions (stay tuned next week for more details!).
Karly and Jen are heading to Phoenix on Thursday for STP’s Annual Conference on Teaching. We both noted that this is the first conference we’re going to in a long time where we aren’t presenting anything. It feels strange, but we’re excited to attend all of the great talks that are on the docket. There are so many we’ll have a hard time choosing!
Be sure to look out for our live-tweeting!
Let us know if you’ll be there and see you later this week (hopefully)!
In this research, the author, Dr. Elana Reiser, proposes a model for assessing cooperative learning models as well as individual student learning. The cooperative learning model the author describes is essentially a course where students are placed into permanent groups and learn and work together throughout the semester. Research she cites* identifies three key benefits of group work: "(1) academic gains, (2) improved race relations, and (3) improved social and affective development" (Kagan & Kagan, 1994, as cited in Reiser, 2017). The question, then, is how to assess student learning, in particular individual student progress. Under the collaborative learning model, emphasis on individual progress and achievement may signal to students that group interactions are not valued (Boud, Cohen, & Sampson, 1999, as cited in Reiser, 2017). On the other hand, an assessment that only measures group performance sometimes fails to capture an individual's progress and achievement. The author thus argues that neither purely individual nor purely group assessments are appropriate for collaborative learning models.