Claude vs ChatGPT: Choosing the Right AI for the Job

By Adina Gray, Instructor and AI Educator, Thompson Rivers University

This BCcampus EdTech Sandbox session tackled a very practical question for educators: when you are creating or refining course components such as assignments, activities, and grading rubrics, should you use ChatGPT, Claude, or both? Rather than comparing them in theory, we walked through a short learning journey that mixed quick demos with two hands-on activities based on an upper-level business ethics course.

The session had four goals: clarify what these tools are and what they are good at, make their differences visible on the same prompts, introduce a simple approach to prompt engineering, and help participants feel ready to choose the right tool for specific teaching tasks.

Key Features

Both tools were presented as conversational AI systems that can read, write, summarize, and reason with text, but with different usage patterns.

ChatGPT has broad global reach and a large share of everyday use. Many people rely on it for personal conversations and general writing, using it as a flexible thinking partner for brainstorming, drafting, revising, and explaining concepts. For educators, ChatGPT can be a helpful collaborator for tasks such as drafting assignment instructions, creating sample explanations, and generating class activities.

Claude is more concentrated in the higher-income regions of the world (e.g., Australia, Canada, and the U.K.) and in work-focused use. It has a stronger footprint in coding, education, scientific work, and automation. In a teaching context, this makes it a good option when you want more structured reasoning on complex scenarios, support with technical material, or help working through longer or more demanding prompts.

For most day-to-day teaching work, both tools can handle the same set of jobs: generating activities, transforming them into assignments, drafting rubrics, and adjusting tone or level of difficulty for different audiences. Comparing them side-by-side lets educators see how their styles differ in practice.

Weaknesses and Limitations

This session framed both tools as powerful but highly dependent on the user’s prompting skills.

The first limitation is prompt sensitivity. Common pitfalls include prompts that are too vague, overloaded with requests, or missing basic constraints such as course level or available time. The second is missing context: unless the instructor states who the students are, what they already know, or how long the activity can take, the tools fill that gap with their own assumptions. They are strong pattern matchers, not mind readers, so the human still has to supply course-level judgment and editing.

Recommended Activities

Two practical activities formed the core of the session and can be reused directly.

Activity 1: Generate a class activity with both tools
The first activity asked participants to use both tools to generate a short in-class activity for a real course. The demo used a third-year business ethics and society course, drawing on the course outline and module two readings. The prompt asked each tool to design a 15-minute activity that would help students apply key concepts in a realistic way and keep things interactive.

The presenter shared the responses from ChatGPT and Claude and asked participants to notice how they varied in both quality and practicality. For example, Claude’s output felt more polished and ready to use, while ChatGPT’s offered more creative ideas to build on.

At the end of the demo, participants were asked to upload their own course materials, write a prompt, test both tools, and compare the results.

Prompting as a Skill

At the end of the first activity, the presenter introduced a five-part prompting toolkit:

  • Be specific about context: e.g. course level, discipline, learning objectives
  • Specify format and constraints: e.g. length, timing, number of examples
  • Define the audience: e.g. who this is for and what they already know
  • Request specific elements and examples: e.g. scenarios, questions, criteria for success
  • Iterate and refine: e.g. treat the first answer as a draft and steer the output with follow-up prompts

These strategies helped participants understand that the magic is not in the tool alone, but in how precisely they describe the teaching task.
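To make this concrete, a prompt that combines these elements might look something like the following (the class size and specific deliverables here are placeholders, not wording from the session): “Design a 15-minute in-class activity for a third-year business ethics and society course of about 40 students who have completed the module two readings. Make it interactive, include one realistic scenario and three discussion questions, and describe what a successful student response looks like. Treat this as a first draft that we will refine together.”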

They also tried meta-prompting: asking the tools to write an excellent prompt for a task themselves, then using that prompt as a starting point. This strategy can be especially helpful for users who feel unsure how to phrase what they want.
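For instance, an instructor might open with something like: “I want a 15-minute in-class activity for my third-year business ethics course. Before you design it, write the best possible prompt I could give you for this task and wait for my go-ahead.” The tool’s suggested prompt can then be edited and sent back as the actual request. (The wording above is an illustration rather than a prompt demonstrated in the session.)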

Activity 2: Turn the activity into an assignment and rubric
The second activity built on the previous exercise by asking participants to take one of the generated activities and ask each tool to convert it into a structured and comprehensive take-home assignment. Participants were again encouraged to use the meta-prompting technique, asking the tools to generate their own prompts before proceeding with the task. Once participants were satisfied with the assignment, they were instructed to ask the tools for assistance in creating a grading rubric.
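A follow-up prompt for this step might be as simple as: “Convert the activity above into a take-home assignment for the same third-year business ethics course, including instructions, expected length, and submission requirements. Then draft a grading rubric with four criteria and three performance levels.” The number of criteria and levels here is a placeholder that instructors would adjust to their own grading practices.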

The group reflection questions at the end of this activity included:

  • Which tool worked better for your specific task?
  • Were there moments when the AI exceeded or fell short of your expectations?
  • Did your tool preference shift between the activity creation and grading rubric generation?

Key Takeaways

The closing remarks summarized four main messages that emerged during the session:

  • There is no single winner. ChatGPT and Claude both have strengths, and the best tool depends on your course, context, and preferences.
  • Prompting is a skill. It can be learned and practised, and better prompts produce noticeably better results.
  • AI assists rather than replaces. These tools work best as assistants that extend your creativity and efficiency, not as substitutes for human judgment or pedagogy.
  • Experimentation and reflection matter. The clearest way to see where each tool adds value is to try them on real teaching tasks, compare the results, and reflect on what works for your students.

Overall, the session positioned ChatGPT and Claude not as competitors fighting for a single winner, but as two useful tools in an educator’s toolkit. With thoughtful prompts and a bit of structured experimentation, participants saw how both could help them design more engaging and realistic learning experiences.

Webinar Resources and Transcript

If you missed the webinar, or want a quick refresher, you can access the webinar recordings and transcript here:
EdTech Sandbox Series: Claude vs. ChatGPT – Choosing the Right AI for the Job