By Irina Tzoneva, Instructor, Douglas College
As part of the BCcampus Research Fellows Program, I am pleased to share a summary of my research on how university students use generative AI in their writing practices, as well as the extent to which instructors can distinguish between student-authored and AI-assisted texts. In addition, the study explored students’ experiences, concerns, and motivations related to AI use. The study addressed two central research questions:
- Can instructors reliably differentiate between student-written and AI-assisted text?
- What challenges and opportunities do AI tools present for students engaging in academic writing?
Study Design and Methods
This study used a qualitative design. I received ethics approval from the Douglas College Research Ethics Board before the study began. Participants were undergraduate students enrolled at a college in the B.C. Lower Mainland during the 2024/2025 academic year, recruited through classroom announcements, the Blackboard learning management system, and flyers. Inclusion criteria required students to be between 18 and 25 years old and to have a proficient understanding of English. No additional demographic requirements were set beyond voluntary participation.
Among the 13 participants, five identified as male and eight as female. Their ages ranged from 18 to 25, with an average of 20.9 years. Eight participants identified as Asian, four as Caucasian, and one as Persian. Participants represented a mix of academic disciplines and ranged from first- to fourth-year students, with an average of two years of post-secondary education. All participants received a $5 gift card in appreciation for their involvement in the study.
Data collection involved two writing tasks and a semi-structured interview. First, participants wrote a short essay under controlled conditions, without access to library resources or AI tools, which allowed me to assess their traditional research, writing, and citation skills. Next, they wrote a short essay at home in which they were encouraged to use AI tools.
Both writing samples were evaluated through a readability analysis to compare linguistic complexity and reading ease across the controlled and AI-assisted conditions. Two experienced faculty members independently reviewed all writing samples to determine whether each text appeared to be AI-generated or student-authored.
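The study does not name the specific readability formulas it applied; widely used choices include the Flesch Reading Ease score and the Flesch-Kincaid grade level. As a minimal sketch of how such an analysis could be run, the example below uses the open-source textstat package and two hypothetical essay files (one per writing condition) to compute the kinds of surface metrics discussed in the findings. The file names and the metric selection are illustrative assumptions, not the study's actual procedure.

```python
# Illustrative sketch only: the study does not name its readability formulas.
# Assumes the open-source `textstat` package (pip install textstat) and two
# hypothetical text files, one per writing condition.
import textstat

def readability_profile(text: str) -> dict:
    """Compute surface metrics of the kind discussed in the findings."""
    words = text.split()
    return {
        # Flesch Reading Ease: higher scores indicate easier-to-read text.
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        # Flesch-Kincaid: approximate U.S. school grade level.
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "sentence_count": textstat.sentence_count(text),
        "word_count": len(words),
        # Characters per word as a rough proxy for lexical density.
        "chars_per_word": sum(len(w) for w in words) / max(len(words), 1),
    }

for label, path in [("controlled", "controlled_essay.txt"),
                    ("ai_assisted", "ai_assisted_essay.txt")]:
    with open(path, encoding="utf-8") as f:
        print(label, readability_profile(f.read()))
```

Comparing these profiles across the two conditions would surface exactly the contrasts reported below: reading-ease scores, sentence counts, and characters per word.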
Participants also took part in individual semi-structured interviews to discuss their experiences with AI, including perceived challenges, opportunities, and concerns related to academic integrity. Interview transcripts were analyzed thematically to identify patterns in students’ perceptions and practices related to AI-supported academic work.
Findings
Analysis of readability and linguistic features of the writing samples revealed consistent differences between essays produced with AI assistance and those produced without it. The AI-assisted texts tended to be more complex at the sentence and word level, whereas the non-AI-assisted texts were more complex at the structural level.
Applying formulas to measure reading difficulty revealed that the AI-assisted texts were harder to read because of their greater linguistic density and more demanding sentence structure. In contrast, the non-AI-assisted texts had higher structural complexity, particularly in paragraph organization. They contained more sentences per paragraph, demonstrating a more traditional academic paragraph structure. While AI-assisted texts contained more sentences overall, these were spread across a larger number of paragraphs.
At the sentence level, AI-assisted texts contained shorter sentences with more characters per word, indicating the use of multisyllabic vocabulary. The non-AI-assisted texts contained longer sentences with fewer characters per word. The total word counts for the AI- and non-AI-assisted texts were relatively similar, although the AI samples had slightly fewer words and more characters overall, suggesting denser vocabulary and more elaborate word choice.
Taken together, these differences demonstrate that AI-assisted texts tend to emphasize lexical sophistication and readability complexity, while non-AI-assisted texts reflect more organically developed academic discourse patterns.
The raters correctly classified 69% of the samples, suggesting that distinguishing AI-generated writing from student-authored writing remains challenging in a substantial number of cases. The raters also assessed each writing sample on language quality, text structure, content quality, and overall writing quality. These evaluations were, on average, similar across the AI-assisted and non-AI-assisted writing samples, suggesting that although AI-assisted writing may differ from student writing, these differences did not influence expert perceptions of quality. Moreover, the results indicate that AI-generated and student-authored texts share enough surface- and discourse-level characteristics to make reliable detection difficult even for experienced instructors.
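For readers who want to run a similar rater analysis, the sketch below shows how classification accuracy and chance-corrected inter-rater agreement (Cohen's kappa, a statistic not reported in this study) could be computed with scikit-learn. The labels are made up for illustration; they are not the study's data.

```python
# Illustrative only: the labels below are hypothetical, not the study's data.
# Assumes scikit-learn (pip install scikit-learn).
from sklearn.metrics import accuracy_score, cohen_kappa_score

# True condition of each sample: 1 = AI-assisted, 0 = student-only.
truth   = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
# Each rater's independent judgment of the same samples.
rater_a = [1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
rater_b = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

for name, labels in [("rater A", rater_a), ("rater B", rater_b)]:
    print(f"{name} accuracy: {accuracy_score(truth, labels):.0%}")

# Chance-corrected agreement between the two raters.
print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")
```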
Analysis of the interview data revealed five themes reflecting students’ experiences, perceptions, and concerns regarding the use of AI tools in academic writing:
- AI as a writing support tool
Participants consistently described AI as a writing support tool, particularly for editing, grammar correction, and clarity, rather than as a replacement for their own writing. Some students shared that AI substantially reduced the time required for assignment revisions, while others noted that editing with AI helped maintain the flow of their work. Several participants also used AI for brainstorming or to help them express complex thoughts, though they emphasized that the core arguments remained their own.
- Strategic and selective use of AI
Students used AI strategically based on the demands of their specific assignments. For example, some used AI only when they were stuck or needed help with paper outlines or fact-checking. Others adopted a more cautious approach because of their concerns about institutional rules and potential academic consequences. Since students were, at times, unsure what instructors permitted regarding AI use, they either limited their AI use to grammar correction or avoided AI entirely.
- Ethical considerations and academic integrity
Issues of academic integrity were central to participants' experiences. Students expressed concerns about unintentional plagiarism, the accuracy of AI-generated information, and the ethics of presenting AI-assisted writing as their own.
- Institutional ambiguity and inconsistent expectations
Students said they were confused because course instructors were inconsistent in their expectations. While some gave explicit guidelines on acceptable AI use in their course syllabi and assignment descriptions, others provided only general warning statements or no guidance at all.
- Need for AI literacy education
Given these challenges, students expressed a strong need for clearer institutional guidance and formal AI literacy education, such as workshops, short learning modules, or discipline-specific resources, to help them understand how to use AI tools ethically, effectively, and within academic expectations. Participants shared that AI literacy education would not only support responsible AI use but also reduce the anxiety caused by inconsistent institutional policies.
Overall, students viewed AI as a valuable academic writing tool when used cautiously. They strongly believed that meaningful learning requires their own ideas and decision-making, with AI serving as an assistant to, not a replacement for, their thinking. Their perspectives highlight the importance of institutional clarity and intentional AI literacy initiatives in post-secondary settings.
Conclusion
This study demonstrated that AI-assisted and non-AI-assisted student writing differ. AI-generated texts exhibit greater lexical density and readability complexity, while student-authored texts reflect more conventional academic writing. Although faculty raters distinguished between the two types of writing with moderate accuracy, their evaluations of overall text quality were comparable, highlighting the difficulty of reliably identifying AI use based solely on surface-level text characteristics. Students' perspectives further suggest that AI serves primarily as a supplemental tool for writing rather than a replacement for their own work. The findings emphasize the need for clearer institutional policies, structured AI literacy education, and pedagogical strategies that support both responsible AI use and the continued development of students' independent writing and critical thinking skills.
This study has several limitations that should be acknowledged. The small sample size limits the generalizability of the findings, particularly given the diversity of student experiences, disciplines, and levels of familiarity with AI in post-secondary settings. The study was also conducted at a single institution, which may not reflect broader institutional cultures, policy environments, or student practices elsewhere. Future research should examine larger and more diverse student populations while considering how discipline-specific expectations shape AI use and how writing development evolves when AI is integrated more intentionally into instruction. Investigating faculty perspectives and the impact of AI literacy interventions would also provide a more comprehensive understanding of how AI is integrated into academic environments.