A Call to Educators: Why We Need to Talk About AI in Week One

By Adina Gray, Instructor and AI Educator, Thompson Rivers University

Background: Why Week One Sets the Tone

Students are already using AI. Week one is our chance to make that use visible, ethical, and learning-centered.

In May 2025, I facilitated a BCcampus FLO Friday session on responsible AI use. The message from educators across B.C. was clear: learners are walking into our courses with AI in their pockets, yet many are unsure what is allowed, what is wise, and what is safe. I believe starting the AI conversation in week one is critical, not only to support student success in the classroom, but also to prepare students for a workplace where AI literacy is rapidly becoming a core requirement.

This need became especially clear in my own teaching last semester. When I asked my class who was using AI for school, only a few hands went up. But when I shared how I use it—brainstorming activities, drafting case studies, and generating readings—students opened up about their own experiences. Some admitted they use AI daily, others had concerns about misinformation or plagiarism, and most were confused by inconsistent course policies. With one instructor banning AI, another ignoring it, and a third encouraging it, students told me AI often feels taboo. As the semester progressed, I noticed students becoming more open. After I demonstrated a few AI tools and encouraged them to treat AI as a collaborator in class activities, many told me how rare it was to have an instructor do that. They said they appreciated the openness and that it helped them learn more effectively.

What I witnessed in class reflects what broader surveys are showing. Microsoft’s 2025 AI in Education Report found that while most students and educators have tried AI, fewer than half feel that they know a lot about it, and many say they need clearer guidance and training. The workplace picture is equally urgent. The World Economic Forum’s Future of Jobs Report 2025 shows that 86% of employers expect AI to transform their business by 2030, with most planning to reskill their workforce and hire for AI-specific skills. LinkedIn’s AI and the Global Economy echoes this trend: 66% of leaders say they would not hire someone without AI literacy, and the demand for AI skills in job postings has surged more than sixfold over the past year.

Taken together, these insights underscore the need for proactive engagement. This year, I am going further in my classes. I will start the term by inviting students into a transparent, co-created approach to AI, talking openly about when it helps, when it harms, and how we will use it ethically and productively in our course. The bottom line is that students need clear expectations and a safe place to practice judgement with these tools, and week one is the best time to set that tone.

Strategy: Talk About AI in Week One

Here is the simple idea: make AI part of your first-week conversations, just like academic integrity or participation policies. Doing so sets the tone for transparency, curiosity, and critical thinking.

1.  Share your stance openly

Clarify when AI is encouraged, when it requires permission, and when it is off-limits. Clear expectations reduce confusion and build trust.

Example: “In this class, we will explore when and how AI can be used and when it should not be used. Our goal is to use AI thoughtfully, not to replace your learning.”

2.  Invite student input

Ask students how they are already using AI. This acknowledges their experience and signals that AI use is not taboo.

Example: “How are you already using AI tools? Where have you found them helpful or frustrating?”

3.  Use the traffic-light framework

The traffic-light system is a simple, visual way to communicate expectations across assignments:

  • Green: allowed (e.g., brainstorming, idea generation, drafting outlines)
  • Orange: allowed only with permission or disclosure (e.g., refining drafts, summarizing, grammar checks)
  • Red: prohibited (e.g., submitting AI-generated work as your own)
Each colour pairs a meaning with examples of instructor language:

  • Green – Encouraged Use: Students can freely use GenAI as a tool to support learning. Sample language: “You are encouraged to use tools like ChatGPT for brainstorming, drafting, and checking grammar in this course. Be sure to review, edit, and reflect on the AI’s output—it should support your thinking, not replace it.”
  • Orange – Use with Caution: GenAI can be used in limited ways, with transparency. Sample language: “You may use GenAI tools to generate ideas or outlines for your assignments, but all final writing must be your own. If you use AI in any part of your work, include a brief statement at the end describing how you used it.”
  • Red – Not Permitted: GenAI is not allowed due to the nature of the task. Sample language: “Do not use AI tools to complete this assignment. The goal is to assess your original thinking and understanding. Using AI in this context will be considered academic misconduct.”

4.  Model responsible use

Show students how you use AI yourself: for brainstorming discussion questions, drafting case studies, or designing inclusive activities. When instructors model responsible use, it reduces stigma and demonstrates integrity in practice.

5.  Acknowledge complexity

Remind students that AI is powerful, but imperfect. It can spark creativity and save time, but it also raises concerns about bias, misinformation, and over-reliance. Naming this complexity helps establish a classroom culture where AI is used with critical awareness.

Implementation: How to Run the Week One Conversation

Talking about AI in week one doesn’t need to be complicated. Here are some practical ways to make that first conversation meaningful and memorable:

Start with a quick poll or show of hands
Ask: “Who has used AI for school, work, or personal projects?” This surfaces the diversity of student experiences and makes AI use visible.

Walk through the traffic-light chart
Explain what each category means for your course. Invite questions and ask students to suggest examples of tasks they think belong in each colour.

Invite student stories
Have students share in pairs one way they’ve tried (or avoided) AI so far, and then gather a few examples as a group. This signals that their voices matter and helps normalize open conversation.

Examine a short AI-generated example
Bring a one-paragraph AI output relevant to your discipline. Ask: “What is strong here? What is missing? Would you trust this?” This sparks critical reflection and shows that AI is imperfect.

Co-create class guidelines
Conclude by drafting a short “AI in this course” agreement together. This might be one or two points you include in your syllabus addendum.

These strategies create clarity and trust in week one, laying the groundwork for ethical, critical, and confident use of AI throughout the course. Once you’ve tried these simple entry points, the next step is to keep the momentum going.

Recommendations: Where to Start

You don’t need to have all the answers or redesign your entire course. What matters is opening the door in week one and signaling that AI can be discussed openly, critically, and responsibly.

Here are a few simple ways to begin:

  • Break the silence: mention AI in your first class, just as you would with academic integrity or participation policies.
  • Keep it simple: share one or two clear expectations using plain language and a visual like the traffic-light chart.
  • Start small: choose one activity to open the door — whether it’s a quick poll, sharing your stance, or reviewing an AI example together.
  • Be transparent: let students know you are learning too, and invite them to help shape responsible practices in your course.
  • Keep the conversation alive: revisit what was discussed and agreed upon in week one. Remind students of the guidelines you created together, and continue weaving AI conversations into activities and assessments across the semester.

Looking Ahead: From Week One to Workplace Readiness

Over the past two years, I have led AI literacy initiatives at my university, presented at international conferences across Canada, the U.S., and Europe, and collaborated with industry leaders, government officials, and civil society organizations. Across all of these settings, the message has been strikingly consistent: AI skills are no longer optional — they are essential for new hires in nearly every profession, and universities are under growing pressure to prepare graduates accordingly.

I believe that, as educators, we carry a responsibility to engage with AI ourselves and to model its ethical and effective use. Opening these conversations with our students in week one can lay the groundwork for their confident and responsible use of AI throughout their studies and into their future careers.

Acknowledgement

This blog is the result of a true human-AI collaboration. The final version emerged through many back-and-forth conversations between Adina, who provided the ideas, stories, and framework, and ChatGPT, who acted as an editorial partner to refine, enhance, and shape the writing.