Brewing Dialogue: Coffee Chats With AI-Powered Philosophers

By Manu Sharma and Brad Forsyth, Thompson Rivers University

The inspiration for developing the History and Philosophy of Education course in an innovative way using artificial intelligence, one that meets students in the increasingly digital world of 2025, stemmed from a desire to push beyond my comfort zone while deepening student engagement. This was my first time teaching the course, which students often dislike because of its heavy reading load, dense content, and status as a required course.

I have always loved studying philosophy and wanted to share the work of philosophers from around the world within the context of education. This global approach was especially important because 19 of the 21 enrolled students were international students; the two domestic students came from a mid-sized city in British Columbia.

Considering how to bridge artificial intelligence with a content-heavy course, I recalled a long-held aspiration to create a Socrates Café, a space for philosophers to engage with students and exchange ideas. Until recently, this vision was abstract, limited to students’ internal conceptualizations rather than lived dialogue. In the era of artificial intelligence, it suddenly became possible. I began exploring chatbots and AI to bring philosophers’ personas to life by carefully shaping their depth and accuracy using primary and secondary open-access sources.

This possibility was both exciting and technologically challenging. As a result, I reached out to my colleague, Brad Forsyth, who had been doing innovative work with AI through the learning technology and innovation team at Thompson Rivers University (TRU), and he was open to the challenge of co-designing this course with me.

When we met in the spring of 2025, we began brainstorming what chatbots embodying the views of deceased philosophers might look like, as well as whether this approach constituted an ethical and responsible use of artificial intelligence. The philosophers I selected for the course included Freire, Greene, Plato, Wiredu, Tagore, Confucius, and Dewey. Using this list and a template provided by Brad, I was able to generate the foundational structures of each philosopher’s educational philosophy by identifying key concepts and themes and linking them to relevant primary and secondary sources.

While Brad focused on issues of privacy and determined which platform would best host the chatbots, I concentrated on designing a scaffolded assignment that would run alongside the course over its 13-week duration. I developed what I called the chatbot assignment, divided into four parts. The assignment emphasized engagement with the chatbots through Socratic questioning and dialogue, encouraging students to enter into sustained conversations with a philosopher of their choice.

I believe that Socratic dialogue provides a gentler entry point into philosophical inquiry, particularly in a 13-week course where dense philosophical texts can be difficult to fully grasp through reading alone. This approach also humanizes the philosophers’ ideas, allowing students to experience individualized interactions that offer encouragement and thoughtful connections between theory and practice. Through application-based and critical thinking questions, students were able to develop a deeper understanding of philosophical concepts and content knowledge than is often achieved in traditional graduate courses with heavy reading expectations alone.

Strategy/Tips

Once we had a vision for the activity, Brad created a working prototype. Custom chatbots offered two main benefits over sharing a copy-and-paste prompt with students: (1) tailored instructions to keep each bot within the boundaries we set, and (2) a curated knowledge base of primary and secondary resources to reduce inaccuracies. He evaluated several platforms against three criteria: privacy, trainability, and performance.

First, Brad tried Copilot with AI agents, as it meets privacy standards at TRU; however, it lacked file upload capabilities and refused to role-play certain philosophers due to its internal guidelines. Next, he tried Poe, which met our training needs and performed reasonably well, but its conversations timed out too quickly. We eventually settled on ChatGPT's custom GPT feature. Because ChatGPT has not passed a privacy impact assessment at TRU, Brad consulted the university's privacy office and made two proposals that secured clearance: (1) we would educate students on data collection, storage, and responsible use, and (2) we would provide alternatives for students who did not wish to create an account.

Brad then researched best practices and existing examples to develop a modifiable instruction template for each philosopher. This process focused on structuring and wording the instructions so the AI could interpret them reliably, while ensuring the role-play felt authentic, knowledgeable, and within scope. How you write instructions will vary by purpose and context, but the structure below offers one approach you may find useful, followed by a condensed example.

  • Purpose/goal/scope: explains why the chatbot exists, what the learning objectives are, and the intended audience, helping your chatbot stay focused and aligned with your goals.
  • Role: defines the role or persona your chatbot will adopt, giving your chatbot a consistent voice and set of responsibilities.
  • Interaction guidelines: shapes how the chatbot communicates (e.g., tone/style, conversation parameters, and how information should be released).
  • Conversation/simulation flow: provides structure for the interaction from beginning to end.
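
To make this structure concrete, below is a condensed sketch of instructions for a hypothetical Dewey chatbot, organized under the four headings above. The wording is invented for illustration rather than drawn from our actual instruction sets, and a production version would be longer and more carefully sourced; it is written as a Python string so it can be reused in the testing sketch later in this section.

```python
# A condensed, invented instruction set for a hypothetical Dewey chatbot,
# following the four-part structure described above. Illustrative only; a
# production instruction set would be longer and more carefully sourced.

DEWEY_INSTRUCTIONS = """\
# Purpose / goal / scope
You help graduate students in a History and Philosophy of Education course
explore John Dewey's educational philosophy through Socratic dialogue.
Stay within Dewey's educational thought; politely decline unrelated requests.

# Role
You are John Dewey. Speak in the first person with a warm, plain-spoken,
reflective voice. Ground your claims in the uploaded knowledge files and,
where possible, name the work you are drawing on (e.g., Democracy and
Education).

# Interaction guidelines
- Ask at most one follow-up question per turn; do not lecture at length.
- If the student gives a surface-level answer, prompt them for an example
  from their own schooling before offering your view.
- If the knowledge files do not cover a question, say so plainly rather
  than inventing a source.

# Conversation / simulation flow
1. Greet the student and ask what drew them to this conversation.
2. Explore their questions through short Socratic exchanges.
3. Close by inviting them to connect one idea to their own practice.
"""
```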

Here are some general design tips to improve accuracy and help your chatbot stay within the role you have defined for it:

  • Define your learning goal: specify who the chatbot is for and what learning problem it supports. A clear goal should guide your instructions, knowledge base, and tool settings.
  • Design pedagogically guided, adaptive responses: employ effective teaching practices (e.g., scaffolding, prompting reflection, withholding full answers) and specify how the chatbot should adapt its responses to student input (e.g., offering hints when students struggle and extending challenges when they show understanding).
  • Separate behaviour from content: write behavioural rules in the instructions field, using clear if/then logic and markdown structure for clarity and readability, and upload reference/training material as knowledge files, enabling the code interpreter capability so the files can be interpreted reliably.
  • Constrain hallucinations and scope creep: specify how and when your chatbot should rely on its knowledge base, limit off-topic responses, and consider disabling web search.
  • Test: test and refine repeatedly to ensure your chatbot behaves appropriately and produces relevant responses, and consider inviting other perspectives (a minimal testing sketch follows this list).
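
Our course used ChatGPT's no-code custom GPT builder, so no programming was required. For readers who would rather iterate on instructions programmatically, the rough sketch below shows one way to test a persona through the OpenAI chat completions API. The abbreviated instruction string and the model name gpt-4o are assumptions for illustration; the script requires the openai Python package and an OPENAI_API_KEY environment variable.

```python
# A rough sketch for testing chatbot instructions via the OpenAI API rather
# than the no-code custom GPT builder our course actually used.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.

from openai import OpenAI

# Paste your full instruction set here (e.g., the Dewey template sketched
# earlier in this section); this one-liner is a placeholder.
SYSTEM_INSTRUCTIONS = "You are John Dewey, in dialogue with graduate students."

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def main() -> None:
    # Seed the conversation with the behavioural rules as a system message.
    messages = [{"role": "system", "content": SYSTEM_INSTRUCTIONS}]
    print("Ask the philosopher a question (blank line to quit).")
    while True:
        question = input("> ").strip()
        if not question:
            break
        messages.append({"role": "user", "content": question})
        response = client.chat.completions.create(
            model="gpt-4o",  # an assumption; use any chat model you can access
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        print(reply, "\n")


if __name__ == "__main__":
    main()
```

Running the script this way makes it easy to probe edge cases (off-topic questions, requests for sources, attempts to break character) and to adjust the instructions before pasting the final version into the builder.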

Implementation

As we moved into course design, addressing both technological requirements and pedagogy, we finalized the delivery of the AI-based History and Philosophy of Education course. We co-taught the first class, during which my colleague Brad reviewed privacy considerations and the ethical use of ChatGPT so students understood their rights and responsibilities when engaging with AI. I introduced the four-part chatbot assignment and outlined the broader course design, including additional required assignments.

I emphasized that the course was designed as a personal and reflective learning space where students’ voices could emerge through varied assignments, including video presentations, class presentations, peer reviews, and discussions. These activities were complemented by structured interactions with chatbots in ChatGPT. What follows is an overview of the four components of the chatbot assignment.

The assignment was intentionally scaffolded with regular check-ins to support student success and allow space for experimentation and revision. Brad developed a website hosting links to each chatbot, and I explained that the course would focus on one philosopher per week to build a strong conceptual foundation. Each week included pre-readings, curated videos, short mini-lectures, and in-class activities such as Chai Conversations and Four Corners, which supported the application of key concepts through paired and small-group discussion.

Parts A and B of the assignment emphasized individual inquiry and reflection. Students independently developed three questions for the chatbot and then engaged in a one-on-one conversation with the philosopher. I reviewed each student’s questions prior to the interaction and monitored submitted conversations as evidence of assignment completion. This process also functioned as an ethical safeguard. Students were informed that this oversight supported privacy, safety, and responsible AI use, and they provided informed consent.

Parts C and D introduced a collaborative dimension. Working in groups of three, students reflected on what they learned from chatbot interactions and identified responses that were inaccurate, misleading, or ethically concerning, drawing on course readings and discussions. Midway through the term, class time was dedicated to structured discussion of these experiences, supported by peer reviews of individual conversations. Brad and I co-facilitated this session, which gave us valuable feedback on how students were experiencing this innovative pedagogical approach.

Following peer review, each group submitted a shared critical reflection for each philosopher. These reflections highlighted key insights, challenges encountered when engaging with the chatbot, and lessons learned, as well as three remaining collaborative questions for further exploration. Groups then returned to the chatbot collectively to investigate these questions, deepening their understanding of the philosopher’s educational philosophy in preparation for the final presentation.

The final component consisted of in-class group presentations grounded in students’ cumulative independent and collaborative learning. These presentations showcased engagement with the chatbot and course materials and demonstrated the effectiveness of this AI-supported approach to teaching the history and philosophy of education.

Overall, the assignment integrated technology, ethics, reflection, and collaboration to support deep learning, foster critical engagement with philosophical ideas, and model responsible AI use within a rigorous, student-centred graduate course focused on dialogue, inquiry, and meaningful educational practice in contemporary higher education contexts.

Reflections and Recommendations

During a facilitated mid-course discussion, we invited students to reflect on two guiding questions:

  • What ethical issues arose during your interactions with the chatbot, and how did you address them?
  • How did the chatbot activity compare to reading academic articles in terms of advantages, disadvantages, and complexity?

Students noted that the chatbot frequently prompted them to draw on personal experiences and required active engagement, often asking follow-up questions rather than simply providing information. One student observed that the chatbot's friendly tone fostered a sense of trust and comfort, which highlighted the risk of anthropomorphizing the tool and underscored the importance of clearly communicating boundaries around sharing personal or sensitive information. Several students also commented that interacting with the chatbot encouraged them to think critically about the accuracy of the content presented, as the precision and transparency of the sources referenced by the chatbot were not always clear.

Despite these concerns, students generally felt that the chatbot created valuable opportunities for pausing and reflecting on how philosophical concepts resonated with their own educational experiences. Many described the interactions as more dialogic and personally meaningful than traditional reading-based approaches. Some students also noted that the chatbot’s concise responses, along with its use of metaphors and analogies, supported efficient learning when compared to longer academic texts. However, students and instructors alike acknowledged that this efficiency carries the risk of losing nuance and bypassing the productive struggle often associated with reading dense philosophical material.

Several students emphasized that once they understood the importance of asking intentional and creative questions, they were able to guide and shape the chatbot conversations effectively. They described learning how to redirect the chatbot through concise and specific prompts, which increased their sense of agency and engagement. At the same time, students identified notable limitations. One recurring concern was the potential loss of meaning when philosophers’ original texts — particularly those written in languages other than English — were translated and further mediated through the chatbot. This issue was raised most frequently in relation to Confucius. Another limitation involved the possibility of fabricated or inaccurate sources. Students acknowledged that, at times, they did not verify references because they became absorbed in the conversational flow and enjoyment of the interaction.

In terms of complexity, students expressed differing views about responsibility for accuracy. Some framed inaccuracies as a limitation of the chatbot itself, while others argued that such concerns risked shifting responsibility away from learners. These students emphasized that critical evaluation and fact-checking should remain core academic responsibilities rather than being delegated to technology.

We are currently gathering and analyzing additional student feedback to refine the activity from both technical and pedagogical perspectives. Planned revisions include further clarifying chatbot instructions to reduce bias and inaccuracies, and better balancing information provision with prompts that encourage deeper student thinking. Although AI literacy was embedded throughout the activity, feedback suggests that more explicit reflection on the tool’s limitations should be incorporated into assessment criteria.

Overall, student feedback was largely positive, with many students appreciating the opportunity to engage with course content in a novel and interactive way. We believe this activity demonstrates how generative AI can support learning experiences that were previously difficult to achieve. However, such approaches require intentional design, transparency, meaningful alternatives, and a commitment to ensuring students engage with course content through multiple modalities while maintaining strong human connection.


Manu Sharma is an Associate Professor at Thompson Rivers University in the Faculty of Education and Social Work. She brings a critical social justice lens to her teaching, research, and service and is committed to creating equitable public education for all students. She is the founder and president of the Canadian Association for Social Justice Education and the founder of the Journal of Social Justice Education. Her research interests and publications in the field of education are based on equity initiatives, social justice pedagogy, deficit thinking, and international teaching experiences. For more information, please visit her website at professormanusharma.com

Brad Forsyth is a Coordinator of Educational Technologies at Thompson Rivers University. He is interested in exploring ways that generative AI can enable new ways to create meaningful and engaging learning experiences.