Barrett, A., & Pack, A. (2023). 'Not quite eye to A.I.: student and teacher perspectives on the use of generative artificial intelligence in the writing process'. International Journal of Educational Technology in Higher Education, 20(59).
Generative artificial intelligence (GenAI) can be used to author academic texts at a level similar to what humans are capable of, raising concern about its misuse in education. Addressing the role of GenAI in teaching and learning has become an urgent task. This study reports the results of a survey comparing educators’ (n = 68) and university students’ (n = 158) perceptions of the appropriate use of GenAI in the writing process. The survey included representations of user prompts and output from ChatGPT, a GenAI chatbot, for each of six tasks of the writing process (brainstorming, outlining, writing, revising, feedback, and evaluating). Survey respondents were asked to differentiate between various uses of GenAI for these tasks, which were divided between student and teacher use. Results indicate minor disagreement between students and teachers on acceptable use of GenAI tools in the writing process, as well as a lack of preparedness for GenAI at both the classroom and institutional levels. These results imply the need for explicit guidelines and teacher professional development on the use of GenAI in educational contexts. This study can contribute to evidence-based guidelines on the integration of GenAI in teaching and learning.
Public interest in artificial intelligence (AI) has grown substantially as a result of recent public access to large language models (LLMs; e.g., OpenAI’s GPT-3 and 4, Google’s PaLM 1 and 2) and chatbots (e.g., OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Bing) that allow users to interface with LLMs. These generative AI (GenAI) tools afford individuals the ability to instantly generate writing on any topic by inputting a simple prompt. The public discourse surrounding GenAI has been mostly positive, but in the education sector there is serious concern about academic integrity and plagiarism (Dehouche, 2021; Lampropoulos et al., 2023; Sullivan et al., 2023; Yeo, 2023). Some schools have responded by banning the technology outright (Yang, 2023), a move likened by some to the banning of the pocket calculator when it was perceived as a threat to math education (Urlaub & Dessein, 2022). What is clear is that this new technology possesses disruptive potential, and that institutions which have relied heavily on student writing for education and assessment will need to respond accordingly.
Although a few schools have banned ChatGPT and similar tools, many have not, displaying confidence that their institution’s academic integrity policy is robust enough to accommodate the new technology. However, current definitions of plagiarism have been described as medieval (Dehouche, 2021; Sadeghi, 2019), typically including language such as kidnapping, stealing, or misappropriating the work of others (Sutherland-Smith, 2005), which now leads us to question whether a chatbot counts as one of these others. Generative AI is trained on a selection of diverse natural language data from across the Internet, which allows it to string together unique combinations of words and phrases, much as humans learn to produce an unlimited amount of novel spoken or written text from the limited language they absorb from their environment, a tenet of generative grammar (Chomsky, 1991). The result is that there is no identifiable other whose work is being stolen by a chatbot. To complicate matters, the language of OpenAI’s Terms of Use states that OpenAI assigns users “all its right, title and interest in and to Output” from ChatGPT, including for purposes of publication (OpenAI, 2023). Any practiced educator would likely agree that submitting an essay written by ChatGPT without disclosure violates academic integrity, but students may not readily see a problem with it.
Although GenAI has multiple applications, its use as an authoring tool in programs like ChatGPT allows for easy misuse. Students who have purposefully violated academic integrity in the past through contract cheating or paper mills will likely not hesitate to use ChatGPT or other GenAI tools to do so now, but other students will need guidance on how to avoid inadvertently cheating. Student perceptions of academic dishonesty have historically been unclear or incomplete, and rarely align with teacher expectations (Tatum, 2022); GenAI will only serve to complicate this (Farrokhnia et al., 2023).
Some advocate working toward a coexistence with AI in education by establishing common goals and guiding exploration of the limitations of the technology (Godwin-Jones, 2022; Tseng & Warschauer, 2023). Yeo (2023) has specifically recommended exploring student perceptions of the ethics of using GenAI tools, and Pack and Maloney (2023a) suggested that teacher and researcher use should also be investigated.
To date, no consensus has emerged regarding what constitutes appropriate use of GenAI in higher education. Therefore, with the goal of identifying some common expectations, the purpose of this study is to explore student and teacher perspectives on using GenAI for various tasks in the writing process, including brainstorming, outlining, writing, and revising done by students, and evaluation and feedback done by teachers. The research questions guiding the study are:
1. What are undergraduate students’ and teachers’ perspectives on using GenAI in the writing process (brainstorming, outlining, writing, revising, evaluation, and feedback)?
2. How do student and teacher perspectives on the use of GenAI in the writing process compare?