Working Towards Ethical Engagement of GenAI in Higher Education: Insights and Recommendations for Post-Secondary Educators 

By Dr. Ki Wight, Dr. Leah Burns, and Mia Portelance 

The question of what constitutes ethical use of GenAI in post-secondary education is ubiquitous in teaching and learning publications and conferences: from grappling with strategies to promote academic integrity to effective learning design for critical thinking, writing, and research skills in an era of AI tools. These sector-wide questions take on distinct importance in art and design education, where creative work and technological innovation are tightly intertwined. While instructors share concerns about authorship and ethics, they also develop digital pedagogies that engage GenAI as an innovative and creative learning tool. Grappling with GenAI in art and design programs carries particular urgency: the widespread impact GenAI is having on creative processes, intellectual property rights, and labour markets is casting uncertainty over the future of creative careers and, thus, of creative education.

This article offers insights from a recent research project scanning academic and non-academic literature for how GenAI is understood, engaged, and regulated in art and design schools and beyond. In what follows, we highlight three themes emerging from this research: intentional pedagogical design, critical media literacies, and key concerns facing educators. We conclude with practical recommendations for ethical and human-centred GenAI engagement in post-secondary teaching.  

Why Intentional Pedagogy Matters in the Age of GenAI 

The literature consistently highlights intentional, student-centred learning design as a foundational principle for engaging GenAI ethically and effectively. Given the ubiquity of GenAI tools available to students within their personal and institutional digital spheres, the research we reviewed suggested that instructors need to design both teaching and assessments to support active engagement with these tools. Such designs may use GenAI tools as part of collaborative, student-centred learning processes. The emphasis in this literature is on prompting students to walk a path that centres their agency and critical thinking. For example, Bozkurt et al. (2024) extend this emphasis by highlighting the role of metacognition in personalized GenAI engagements that support student development of “creativity, critical thinking and empathy” (p. 488). This process emphasizes student decision-making as the driver of AI inputs and the basis for how they choose to utilize AI outputs.

Di Rosario and Ciastellardi (2024) further underscore that ethical engagement with AI tools requires learning activities and assessments built around continuous instructor-student check-ins (p. 97). Such activities can be understood as formative assessments that offer feedback to students on both their thinking and the technological processes they are engaging with in their coursework. Given the way that GenAI is disrupting art and design education, Matthews et al. (2023) assert that it is essential that creative industries education reorient towards the technological and human skillset of adaptability, along with humanistic skills of collaboration, communication, negotiation, critical thinking, judgement, and facilitation. This emphasis on adaptability is likely realized through learning designs that scaffold layers of problem-based learning requiring individual student consideration in connection with other students and, where relevant, technological inputs such as GenAI iteration (Gonsalves, 2024).

What these examples offer us are ideas that GenAI tools may be used as part of the material processes of learning, but that the tool itself isn’t valuable without specific instruction that prioritizes student agency within the learning community. This emphasis on student agency and reflective decision-making leads directly to a second theme across the literature: the urgent need for more nuanced critical media literacies in higher education. 

Designing for Critical AI Literacies: Practical Examples 

Academic literature often states that more or better media literacy education is needed to prepare students for digital life beyond their studies. This includes more refined AI education, but what that means in practice isn’t always articulated. The literature also cautions that instructors cannot assume students possess critical media literacy skills, underscoring the need for more nuanced development and articulation of critical media literacies in post-secondary education. Some articles we reviewed did the work of articulating critical AI literacy skills through practice-based assignments. For example, Park (2024) designed student assignments to test the limits and biases of GenAI images by asking students to prompt the DALL-E software to generate images of women of different races. Students were then tasked with comparing the images generated, and they were able to identify how GenAI visual software was influenced by and reproduced systemic racist tropes: the software had created diverse images of white women but racist visual stereotypes of women of colour. This exercise gave students hands-on experience with critical assessment of GenAI imagery and with developing visual design prompt strategies to move beyond these biases.

Another compelling approach to critical GenAI literacy is advocated by Hausken (2024), who directs students to differentiate between photographs and AI or digitally generated photorealistic images. Hausken prompts students to explore the formal elements of photos, as well as the genre and context of images. By examining how photographs are positioned, utilized, and understood in different digital and social contexts, students gain a better understanding of how to differentiate between photographs taken in real-life contexts and photorealistic images generated by AI. There is an increasing need for this kind of critical AI literacy among students, given the contemporary erosion of public interest in fact-checking or seeking truth over belief (Broinowski & Martin, 2024). These are just a few examples of the kinds of critical concerns that educators must respond to as GenAI tools continue to be integrated into our educational lives.

Critical Concerns Educators Should Address Now 

Unsurprisingly, articles ranging from academic publications to teaching and learning blogs often share a concern that GenAI use is unavoidable in post-secondary education, so grappling with the ethical challenges of GenAI is a necessary contemporary task for educators. Other common concerns we discovered across disciplines are criticisms of GenAI for centring capitalist logics, generating algorithmic biases, contributing to environmental degradation, de-skilling creative workforces, crushing creative labour markets, increasing global inequities (particularly for the Global South), and maintaining poor ethical, privacy, and intellectual property standards. These concerns also affect Indigenous and other under-represented communities in significant ways. GenAI systems frequently draw on cultural images, symbols, and knowledge without consent, sometimes reproducing stereotypes or distorting cultural meaning. Such practices raise issues of cultural appropriation, data sovereignty, and epistemic harm, underscoring the need for careful attention to representation and ownership in AI-mediated learning environments (Lewis, 2020).

In art and design programs, and increasingly across other disciplines, these concerns become more complex as GenAI is embedded directly into everyday software platforms from Adobe and Figma to tools like Microsoft Word and discipline-specific applications (Mentz & Russo, 2024). As a result, students may encounter AI-generated suggestions, iterations, embedded streamlining processes, or corrections without intentionally choosing to use AI tools. This blurs the boundary between AI as a supportive aid and AI as a substitute for developing foundational skills. These conditions raise pressing pedagogical questions about when AI is functioning as a tool, when it is doing the work for students, and how to support learners who may feel destabilized or uncertain about their developing identities as creators, researchers, or practitioners (Mentz & Russo, 2024). These dynamics point to specific curriculum pressure points that educators must now consider. 

Our research suggests that academic programs and instructors need to consider the following pressure points when reviewing, changing, and developing curriculum in an era of GenAI: 

  • How do curricula support students to adapt when they are required to oscillate between up-skilling and de-skilling in response to GenAI and other digital automation?
  • How can instructors both acknowledge and disrupt the capitalist logic of continuous productivity and efficiency, moving instead towards nuanced, impactful, human-centred engagements with GenAI that raise students’ digital literacy (Kicklighter & Seo, 2023; Beyai, 2025; Zhanguzhinova, 2024)?
  • How can faculty support their programs and students to value and protect artistic practice while simultaneously training students for a creative industries job market requiring GenAI literacy (Roussel et al., 2024)?
  • How can educators build in awareness of equity, access, and the social or environmental impact of GenAI technologies in Global South or non-Western contexts (Grájeda et al., 2024)?
  • How can instructors support students in critically assessing how GenAI’s effects on intellectual property and privacy might influence their artistic careers (Di Rosario & Ciastellardi, 2024)?

These questions offer specific prompts for instructors or post-secondary programs reviewing their curriculum with an eye to GenAI integration. While answering them is critical for clarity of student learning and for ethical educational engagement with GenAI tools and topics, our research also surfaced educational approaches that use AI tools to enhance connection and humanism in the classroom.

Human-Centred Futures: Hopeful Approaches  

With a critical but creative eye on GenAI, some educators are developing ways to teach with AI in the service of a caring creative world (Park, 2023). In our research, instructors emphasized that AI tools can be used in learning exercises and assessments in human-centred ways if the learning outcomes centre adaptability, collaboration, communication, negotiation, critical thinking, judgement, and facilitation (Matthews et al., 2023). Further, personal agency, creative expression, and holistic worldly well-being can be prioritized as core educational objectives alongside the skills of GenAI prompt engineering (Fisher, 2024). This agency- and care-first approach is where many see critical social possibility in GenAI technologies (Andreotti, 2025). Taken together, these perspectives foreground human engagement and responsibility to social context in AI-supported learning.

Building on these human-centred approaches, some educators are leveraging GenAI to support critical engagement with digital media. The concept of counter-demo refers to how student learning occurs when they confront GenAI content that reflects misinformation and/or bias, including racism (Buolamwini in Elemen, 2024). Park (2024) illustrates this through an exercise that asks students to have DALL-E generate images of women from different racialized groups, with the resulting images reproducing narrow and derisive stereotypes (p. 37). Park prompts students to identify differences, patterns, and stereotypes and connect these to broader social norms. As a critical exercise, Park’s method quickly acquaints students with AI skillsets, such as prompt generation, while centring critical thinking about the social phenomena that shape inequities within AI outputs and broader human systems and social contexts.

In addition to such critical approaches, other research emphasizes the potential for GenAI to support aesthetic and media literacy skill development. For example, Bender (2023) suggests that teaching filmmaking with both GenAI and traditional processes can help bridge theory/practice divides as contrasting AI-generated and traditionally produced content helps students hone critical and aesthetic judgement. In this approach, students produce AI-generated material alongside traditionally produced creative work, and the differences between them offer students opportunities for nuanced reflection on technological affordances, as well as the ideologies, responsibilities, and decision-making impacts embedded in filmmaking choices.  

Instruction and Course Design: Practical Recommendations for Teaching 

While this article summarizes research on AI use in art and design educational practices, the critical concerns and creative approaches to using AI can also be adapted for other disciplines. Designing learning in an AI context requires us to consider potential risks, including deskilling, the devaluation of human work, shrinking job opportunities, and the biases and misinterpretations embedded in AI technologies.

Drawing from the literature, the following recommendations support intentional and ethical instruction and course design with GenAI: 

  1. Provide clear instructions and methods for using AI as part of the creative or communication process. 
    • Frame as a co-creation tool with outputs that will require human-led iterations. 
    • Continue to emphasize critical processes over products. 
    • Require documentation of AI use and a rationale for its role in the process.
    • Centre the importance of authorship, agency, and collaboration in idea development. 
  2. Integrate critical AI and media literacy skills into course design.
    • Talk with students about the risks and ethical challenges of using AI.
    • Demystify AI technical processes (“under the hood” knowledge).
    • Encourage critical analysis (fact-checking, seeking biases/misinformation) of AI-generated content.
  3. Engage students in discussions about how to articulate and uphold their ethical commitments when using AI in academic, creative, and professional contexts.
  4. Advocate for and centre Indigenous and under-represented communities in decision-making processes regarding AI tools at all stages.
    • Acknowledge that GenAI systems often use Indigenous cultural materials without consent, leading to cultural appropriation and distortion.
    • Address risks of epistemic harm, including the reproduction of stereotypes or misinformation about Indigenous people.
    • Uphold Indigenous data sovereignty by critically considering if and how cultural knowledge should be used in training or teaching with GenAI (Lewis, 2020).
  5. Consider how inequities, especially in relation to environmental, geo-political, and global economic conditions, need to factor into determining how or whether GenAI is developed, used, and regulated.

These examples offer insights for educators on how to centre ethics in learning designs that engage AI tools.

Acknowledgements

This blog post was the result of research completed with the support of an Emily Carr University of Art + Design SSHRC Explore Grant, and the research team: Asad Aftab, MDes; Dr. Leah Burns; Joshita Nagarai, MDes; Michelle Ng, MLIS; Dr. Sara Osenton; Mia Portelance; Abhishek Singh Bais; and Dr. Ki Wight.

References

Andreotti, V. (Aiden Cinnamon Tea & Dorothy Ladybugboss). (2025). Burnout from Humans. Creative Commons License BY-NC-ND 4.0.

Bender, S. M. (2023). Coexistence and creativity: Screen media education in the age of artificial intelligence content generators. Media Practice & Education, 24(4), 351–366.

Beyai, J. (2025). Policy Reflections: AI Generated Art Implications.

Bozkurt, A., Xiao, J., Farrow, R., Bai, J. Y. H., Nerantzi, C., Moore, S., Dron, J., Stracke, C. M., Singh, L., Crompton, H., Koutropoulos, A., Terentev, E., Pazurek, A., Nichols, M., Sidorkin, A. M., Costello, E., Watson, S., Mulligan, D., Honeychurch, S., … Asino, T. I. (2024). The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Praxis, 16(4).

Broinowski, A., & Martin, F. R. (2024). Beyond the deepfake problem: Benefits, risks and regulation of generative AI screen technologies. Media International Australia, 1329878X241288034.

Elemen, J. (2024). Teaching CRITICAL GenAI Literacy: Empowering students for a digital democracy. Literacy Today, 42(2), 60–61.

Fisher, J. (2024). Teaching Creatives to be A.I. Provocateurs: Establishing a Digital Humanist Approach for Generative A.I. in the Classroom. Tradition Innovations in Arts, Design, and Media Higher Education, 1(1).

Gonsalves, C. (2024). Generative AI’s Impact on Critical Thinking: Revisiting Bloom’s Taxonomy. Journal of Marketing Education, 02734753241305980.

Grájeda, A., Córdova, P., Córdova, J. P., Laguna-Tapia, A., Burgos, J., Rodríguez, L., Arandia, M., & Sanjinés, A. (2024). Embracing artificial intelligence in the arts classroom: Understanding student perceptions and emotional reactions to AI tools. Cogent Education, 11(1), Article 2378271.

Hausken, L. (2024). Photorealism versus photography: AI-generated depiction in the age of visual disinformation. Journal of Aesthetics & Culture, 16(1), 1–13.

Jayakumar, S., Ang, B., & Anwar, N. D. (Eds.). (2021). Disinformation and fake news. Palgrave Macmillan.

Kicklighter, C., & Seo, J. H. (2023). Aberrant Creativity: AI Art Exhibition Catalyzing Conversations Among Artists, Educators, and Professionals. Tradition Innovations in Arts, Design, and Media Higher Education, 1(1), Article 9.

Lewis, J. E. (Ed.). (2020). Indigenous Protocol and Artificial Intelligence Position Paper. Indigenous Protocol and Artificial Intelligence Working Group, Honolulu, Hawaiʻi.

Matthews, B., Shannon, B., & Roxburgh, M. (2023). Destroy All Humans: The Dematerialisation of the Designer in an Age of Automation and its Impact on Graphic Design—A Literature Review. International Journal of Art & Design Education, 42(3). https://doi.org/10.1111/jade.12460

Mentz, J., & Russo, M. A. (2024). The Architect’s Journey to Specification 2024: Artificial Intelligence adoption in architecture firms: Opportunities & Risks. The American Institute of Architects.

Park, Y. (2023). Creative and Critical Entanglements With AI in Art Education. Studies in Art Education, 64(4), 406–425.

Park, Y. S. (2024). White Default: Examining Racialized Biases behind AI-Generated Images. Art Education, 77(4), 36–45.

Rosario, G. D., & Ciastellardi, M. (2025). The Integration of Artificial Intelligence in Communication Design. Case Studies from the Polytechnic of Milan: From Digital Culture to Sociology of Media. Journal of Educational, Cultural and Psychological Studies (ECPS Journal), 30, Article 30.

Roussel, R., Özer, S., Jacoby, S., & Asadipour, A. (2024). Responsible AI in Art and Design Higher Education.

Zewe, A. (2023, November 9). Explained: Generative AI. MIT News.

Zhanguzhinova, M. (2024). Artificial Intelligence in Education: A Review of the Creative Process of Learning Students on Art Educational Programs. Central Asian Journal of Art Studies, 9, 289–307.
