By Gwen Nguyen, advisor, Learning and Teaching, BCcampus
As a busy summer draws to a close, I’m reflecting on my experiences, from the BCcampus Regional Roadshow to the international Teaching and Learning with AI conference to the meaningful S’TENISTOLW at Camosun College. Amidst these reflections, the words of Dr. Bonnie Henry, “Be kind, be calm, be safe,” echo in my mind. Her message, delivered with timely grace during the pandemic, not only resonated with me as a citizen of British Columbia but also illuminated my role as an educator in these challenging times in higher education, marked by the turbulent waters of artificial intelligence (AI) and pressing social issues such as equity, diversity, inclusion, and decolonization. In the first post of this season’s Digital Pedagogy Toolbox series, I wish to explore how these principles can guide us as we approach AI in teaching in higher education.
Please join me on this journey with a warm and open heart.
Be kind
Be kind to yourself
In the relentless influx of AI discussions and decisions, maintaining kindness to oneself is crucial. One of the check-in activities I often use with educators is to ask, “How would you rate your feelings on a scale of one (pessimistic) to five (optimistic) when you encounter another discussion related to AI?” Educators in Canada typically select two or three, indicating reluctance or concern, while some in the US might choose four or five, reflecting a more optimistic approach to integration, often at a faster pace. How about you right now? How do you react to learning that 100% of students report using AI, yet many do not view it as cheating (Bowen & Watson, 2024)?
I’m often referred to as a Generative AI (GenAI) expert, but I humbly disagree. Becoming an expert in this field is daunting. I read extensively, write, and deliver workshops, yet I’ve made mistakes along the way. There are days I feel overwhelmed, unsure of my next steps in the evolving landscape of higher education.
When we hear “be kind,” we often think of extending kindness to others. However, I believe that being kind to ourselves, which is often forgotten, should be a priority as we resume our academic and teaching roles this fall. Only when we are kind to ourselves, understanding our strengths, limits, and needs, can we better open our hearts to others.
As Lorde (1988) highlighted, self-care is a necessity: “Caring for myself is not self-indulgence, it is self-preservation and that is an act of political warfare” (p. 130). This perspective shifts self-care from optional to essential, positioning it as a radical act of acknowledging one’s worth and importance in the educational ecosystem. Practical steps include:
- Scheduling regular AI breaks where we can step away from AI-related content, reducing the risk of burnout and information overload.
- Personalizing AI integration pathways where we can adopt AI tools at our own pace, taking our personal comfort levels and readiness into consideration, thus promoting a healthier approach to technological adoption.
- Building an AI sandbox where we can play, experiment, explore, and fail with AI technologies.
Be kind to others
What concerns are most pressing when navigating the turbulent waters of AI in higher education? Common concerns include falling behind in AI advancements and the potential impact on students’ development of critical thinking skills. Maha Bali (2024) emphasizes the need to cultivate compassion in our learning designs and teaching approaches: “We need to support educators and learners with critical AI literacy development, decide which uses of AI might be harmful or helpful, and invite learners into the process of deciding the policies for AI use within classes and institutions.”
Being kind to others in this context means:
- Creating safe spaces for experiential learning: consider building an AI sandbox for your class or community of practice that allows students or peers to explore AI technologies at their own pace. This space would be free from judgment, facilitating the review of tools and fostering the exchange of innovative ideas in learning and teaching.
- Cultivating empathetic leadership: we should lead from where we are and by example. When we integrate AI disclosure statements into our coursework, we foster an environment of honesty and transparency. For instance, stating “This presentation was prepared with the assistance of the Gamma tool” sets a precedent for trust and accountability.
- Making informed decisions on AI technologies: mindfully select AI technologies for our classrooms, considering ethical implications, institutional technology support, affordability, and the cognitive load on learners. Continually reflect on these decisions and evaluate whether AI technologies enhance the human qualities we value in our practice, such as collaboration, community, trust, creativity, and critical thinking.
- Being patient and understanding with others’ learning abilities and stories: offer generous support and listen attentively to others. Adopt a compassionate approach when addressing students who might use GenAI in unauthorized ways. Despite established guidelines, students might resort to inappropriate AI use because of time constraints or assignments they perceive as irrelevant. It is essential to discuss the importance of integrity and the risks associated with educational shortcuts. By engaging students from the outset and providing adequate support, such issues can be mitigated (Bali, 2024).
Be calm
“Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less,” Marie Curie.
We learn from various sources that AI’s growth could deepen inequities or threaten working conditions. Staying calm, informed, and proactive, rather than reactive, is crucial.
Bowen and Watson (2024) suggest that we should stay calm. When we are calm, we recognize that we already use AI extensively: from Netflix’s predictive algorithms based on our viewing habits to Amazon’s product recommendations to the chatbots that assist us with airline tickets. Kabat-Zinn (1990) similarly suggests that when the mind is as still as clear water, it perceives things more clearly, and it is then that we can draw on its full potential. In this context, as we slow down, we will see the relevance of our ongoing research in digital literacy, which emphasizes that competencies such as information literacy, digital wellbeing, and ethical awareness remain vital when integrating AI technologies in education.
By taking time to quiet the mind, we can reflect on AI’s capabilities and limitations, reducing fear and promoting more thoughtful engagement with the technology. We can also consider a comprehensive strategic plan for integrating these tools into our teaching and learning: rethinking the purpose of education and assessment, using critical thinking skills to break down problems, asking better questions to maximize the results from AI technologies, and transforming hallucinations into creative problem solving (Bowen & Watson, 2024).
Stay calm because we are in this together and, collectively, we will find ways to help each other thrive in this digital age. At BCcampus, we have offered several workshops and resources related to this topic. This fall, we are launching an EdTech Sandbox focused on AI, designed to provide educators, graduate students, and researchers with a space to explore, review tools, and draw inspiration for teaching and learning. This initiative will demystify AI and leverage our existing digital literacy skills and research-informed pedagogies to engage thoughtfully with these technologies. We will also offer a GenAI toolkit for educators in our next Digital Pedagogy Toolbox post.
Be safe
“There’s such a wild west world out there and students have all the horses and the gun,” Overheard.
In the metaphorical wild west of digital landscapes, where students are equipped with a flood of AI tools akin to horses and guns, the question of safety becomes critical. How do you ensure safety for yourself and your learners in such a dynamic environment? It is essential to understand, and to impart to our students, the ethical and legal considerations surrounding AI so we can navigate this terrain responsibly.
For example, it’s important to have open conversations with peers and students about bias in AI technology. Biases may stem from the data a model was trained on, the intentions of its creators, or even the feedback from human reviewers. Furthermore, biases can be embedded within the network architecture, decoding algorithms, model objectives, and other less apparent aspects of AI models. While AI’s capacity to “hallucinate” can lead to novel combinations of ideas and words, it also presents unique ethical challenges (Bowen & Watson, 2024).
For resources, explore Leon Furze’s series of articles on AI ethics. The series delves into nuanced ethical questions, exploring how they intersect with writing instruction and broader educational practices. Each article provides a thorough examination of a specific ethical issue and offers practical suggestions for educators seeking to integrate these discussions into their classrooms.
The ethical dimensions of AI technology usage in education are critical. Educators should be proactive in learning and teaching the ethical and legal implications of AI tools to protect themselves and their students. This involves:
- Data protection: implementing best practices for data privacy and security, ensuring that both educator and student information remains confidential and secure.
- Ethical AI use: teaching and advocating for the ethical use of AI, addressing issues such as algorithmic bias, and promoting fairness and equity in AI applications.
Recently, a professor whom I had approached as a graduate student seeking guidance on my research proposal attended one of my workshops on GenAI literacy. His presence, along with conversations with my father about the transformative impact of artificial intelligence, prompted me to reflect on a fundamental question: what is the purpose of higher education?
When we consider ‘higher education’ or ‘teaching and learning in higher education settings,’ traditional images like blackboards, chalk, desks, and laptops might spring to mind. Yet, as Ingold (2013) suggests, learning is not a product. It is a dynamic process where knowledge is cultivated through all our interactions rather than transmitted statically. As we traverse this varied and changing terrain, we chart our paths collectively, each step a creative act that contributes to the evolving tapestry of our educational landscape. The journey is inherently meandering, a reflection of the organic and often unpredictable process of learning and teaching (Ingold, 2013).
In this spirit, let’s continue to teach and learn together with resilience, guided by the principles of “Be kind, be calm, be safe.” As Dr. Bonnie Henry (2021) stated, “I believed that by recognizing our need for connection, compassion, and community, acknowledging that we’re in this together, and cultivating a sense of common purpose we would build a resilience that would support us through this storm” (p. 203). The outcomes of our endeavours are uncertain — we might succeed, or we might fail again and again; but, for now, let’s keep trying, knowing that we have each other.
References
Bali, M. (2024). A compassionate approach to AI in education. https://knowledgemaze.wordpress.com/2024/04/29/a-compassionate-approach-to-ai-in-education/
Bowen, J. A., & Watson, C. E. (2024). Teaching with AI: A practical guide to a new era of human learning. Johns Hopkins University Press.
Furze, L. (2023). Teaching AI ethics. https://leonfurze.com/wp-content/uploads/2023/02/Leonfurze_com_AIEthics.pdf
Henry, B., & Henry, L. (2021). Be kind, be calm, be safe. Penguin Canada.
Ingold, T. (2013). Making: Anthropology, archaeology, art and architecture. Routledge.
Kabat-Zinn, J. (1990). Full catastrophe living: Using the wisdom of your body and mind to face stress, pain, and illness. Dell.
Lorde, A. (1988). A burst of light: And other essays. Firebrand Books.