Post by 2020–2021 BCcampus Research Fellow Elle Ting, instructional associate at Vancouver Community College’s Centre of Teaching, Learning, and Research. Read Elle’s first post.
The Alternative Assessment Toolkit builds on what Vancouver Community College (VCC) instructors shared with our research team during the first phase of our project, which examined the implementation of alternative-assessment strategies for emergency remote learning in the early weeks of the COVID-19 pandemic, in March and April 2020. A focus of our study was how instructors defined a “successful” assessment strategy in terms of accurately measuring student learning, supporting student access, and protecting academic integrity. In analyzing the information gathered in our literature review, survey, focus groups, and other institutional data, we drew the following preliminary conclusions:
- Instructors’ perceptions of successful assessments changed significantly following the pivot to online learning, in particular their level of confidence in quizzes, tests, and exams, all of which were viewed as much less reliable in an online format than in a face-to-face context.
- Instructors’ baseline confidence in take-home assessments was low before the pivot and fell further in the move to emergency remote learning. This resulted in decreased use of take-home assessments overall.
- Many assessment tools deemed successful by instructors (e.g., Zoom invigilation, time limits) were stressful for learners; in the effort to eliminate the opportunity for academic misconduct, learner stress often became collateral damage.
- Instructors expressed a high degree of comfort and engagement learning new tools but noted a need for more time to develop curriculum and become more proficient in using the Moodle learning management system (LMS).
In our deep dive into alternative assessments with a representative cross-section of instructors, it was encouraging to see diverse and creative approaches, all unified by the shared goal of supporting student success during a turbulent and unpredictable time. Instructors rose to the occasion, going well beyond just making the best of a bad situation to develop novel solutions in the emergency remote learning context that have since evolved into quality online education.
Nevertheless, we also saw opportunities to improve academic integrity protection, specifically by reconsidering student motivation for committing academic fraud. The assessment strategies adopted in response to increased reports of academic misconduct focused almost exclusively on eliminating the opportunity to violate academic integrity. However, per Donald Cressey’s fraud triangle model, which serves as the theoretical framework for our research study, opportunity is merely one of three factors determining ethical risk; the others are pressure and rationalization. All three sides of the fraud triangle interact and intersect, such that an impact on one can have knock-on effects on the others. Consequently, cracking down on opportunity by imposing stress-inducing assessment conditions can backfire: elevated student stress can increase both perceived pressure (“If I fail this timed test, I’ll fail the course, which means I’ll fail the whole program!”) and perceived rationalization (“I have to cheat on this paper—if I don’t, I’ll fail my program, and my life will be ruined!”), which in turn make academic misconduct more likely. For this reason, our research findings support the use of alternative assessment solutions that balance effectiveness with lower student stress.
The Alternative Assessment Toolkit is designed to facilitate instructor decision-making in the implementation of learning evaluation activities. The toolkit is structured as an interactive decision tree: instructors can “choose their own adventure” (much like the 1980s gamebook series of the same name), using what they know about their courses, their learners, and relevant teaching and learning goals to arrive at an organized set of academic integrity solutions.
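For readers curious how a “choose your own adventure” structure of this kind can be represented, a decision tree can be modeled as nested question nodes, each answer leading to the next question or to a recommendation. The questions and recommendations below are hypothetical placeholders for illustration only, not the toolkit’s actual content:

```python
# Minimal sketch of a "choose your own adventure" decision tree.
# All questions and recommendations here are hypothetical placeholders,
# not the actual content of the Alternative Assessment Toolkit.

from dataclasses import dataclass, field

@dataclass
class Node:
    text: str  # question to ask, or (at a leaf) recommendation to give
    branches: dict = field(default_factory=dict)  # answer -> next Node

    def is_leaf(self) -> bool:
        return not self.branches

def walk(node: Node, answers: list) -> str:
    """Follow a sequence of answers down the tree to a recommendation."""
    for answer in answers:
        if node.is_leaf():
            break
        node = node.branches[answer]
    return node.text

# Hypothetical example tree
tree = Node("Does your program require a timed certification exam?", {
    "yes": Node("Include time limits in low-stakes practice assessments."),
    "no": Node("Is the assessment high-stakes?", {
        "yes": Node("Consider an open-book, application-focused task."),
        "no": Node("Consider an untimed reflective assignment."),
    }),
})

print(walk(tree, ["no", "yes"]))
# -> Consider an open-book, application-focused task.
```

Each path through the tree collects the instructor’s context (course, learners, goals) and ends at a set of suggestions, which is the essential shape of the toolkit’s interaction.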
The applicability of assessment options and the ease with which they can be incorporated into course design may vary widely by subject and context. As such, we can offer no prescriptive advice about the best options for appropriate assessment, aside from noting that a backward design approach to using the toolkit optimizes its effectiveness. For example, in a workshop we attended with VCC faculty, participants noted that while time limits were indeed stress-inducing for students, the national certification exam that graduates had to pass as an entry-to-practice requirement was itself time-limited. Because time limits were unavoidable in this case, we treated students’ familiarity with and readiness for time-limited activities as a learning outcome in its own right and recommended including time limits in no-stakes or low-stakes assessments. Determining the learning goals and using them as the point of departure for planning assessment allows instructors and curriculum developers to rethink academic integrity protection by centring stress-reduction measures instead of fixating on opportunity alone.
It is also worth noting that while Moodle is the LMS used at VCC, the toolkit is meant to be platform-agnostic and is built around assessment types and strategies. We expect the LMS features included in the toolkit can be implemented on various systems, and many of the alternative assessment options would apply to different delivery models. The use of an academic integrity reminder, for instance, would be as applicable to a face-to-face classroom as it would be in an online or hybrid environment. While it was a shift to online learning that prompted the development of the study and subsequent toolkit, the lessons learned in the process will be helpful in planning for educational quality across different modalities and scenarios moving forward.
Conclusion and Future Work
This study was initially designed to examine instructors’ use of alternative assessment to protect academic integrity following an institutional decision not to adopt third-party online proctoring tools. As the post-secondary sector emerged from the trenches of emergency remote teaching, it became clear that much of what instructors had implemented in the initial pivot to online had become normalized as good practice. In particular, VCC instructors’ experimentation with alternative assessments during the pivot proved to be an effective and, most importantly, more humane, equitable, “people-centred” approach (Silverman et al.) to preventing academic misconduct.
There remains an opportunity to facilitate further conversation about alternative assessments and perform some myth-busting that goes beyond the functionality of the toolkit artifact. To advance that work, our team is designing a game as a companion piece to the toolkit. The game, which identifies and challenges common misconceptions or myths about alternative assessment, is meant to encourage the user to reflect on and address barriers to adopting or expanding alternative assessment.
Finally, one avenue that lay beyond the scope of this project in terms of time and resources, but in retrospect stands out as especially important for future research, is the student point of view on academic integrity: learners’ motivations for upholding or violating it, and how educational misconduct can be prevented or addressed. A valuable sequel to this project would be an examination of student perspectives on academic integrity, which, besides offering further insight into learners’ perceptions, could also serve as a starting point for instructor–student collaboration in designing protection strategies. As relatively simple co-created measures (such as academic integrity reminders) have shown promise in preventing misconduct while minimizing learner stress, more activity in this area is needed to support the collaborative development of tools, processes, and policies to uphold academic integrity.
A sudden, forced shift to emergency remote learning in response to the pandemic challenged all of us engaged in the work of post-secondary education to reframe “successful” teaching and learning. The process of reflecting on and defining the fundamental conditions for success highlighted the need to balance the protection of academic integrity against a commitment to access. What began as an exile from the familiar patterns of work, evaluation, and feedback quickly grew into productive experimentation with different assessment models. As we collectively prepare for the uncertainties of the next “new normal,” it is this adaptability that has emerged as our greatest asset in maintaining readiness and excellence in teaching and learning.
Note: The Alternative Assessment Toolkit is still in its final stages of completion. When available, we will include a link to where it can be found on Elle’s Research Fellow page.
This research is supported by the BCcampus Research Fellows Program, which provides B.C. post-secondary educators and students with funding to conduct small-scale research on teaching and learning, as well as explore evidence-based teaching practices that focus on student success and learning.
Anderson, F. T., & McDaniel, M. A. “Restudying with the quiz in hand: When correct-answer feedback is no better than minimal feedback.” Journal of Applied Research in Memory and Cognition, vol. 10, 2021, pp. 278–288.
Jensen, J. L., McDaniel, M. A., Woodard, S. M., & Kummer, T. A. “Teaching to the test…or testing to teach: Exams requiring higher order thinking skills encourage greater conceptual understanding.” Educational Psychology Review, vol. 26, 2014, pp. 307–329.
Silverman, Sarah, et al. “What Happens When You Close the Door on Remote Proctoring? Moving Toward Authentic Assessments with a People-Centred Approach.” Educational Development in the Time of Crises, vol. 39, issue 3, Spring 2021, DOI: https://doi.org/10.3998/tia.17063888.0039.308
© 2022 Elle Ting released under a CC BY license
The featured image for this post (viewable in the BCcampus News section at the bottom of our homepage) is by Mikhail Nilov from Pexels