The Use of Artificial Intelligence Text Generators in Academic Writing

Mohammed AbdEllah - Aug 26 - Dev Community

Artificial intelligence (AI) tools can assist thinking, but they cannot think; likewise, they can assist writing, but they cannot produce original texts. These tools are computer programs that imitate aspects of the human brain and use AI to generate written content from massive amounts of data. The generated content is a response to the user's prompt. Some well-known AI tools are ChatGPT, Gemini, Jasper, INK Editor, and Writesonic.

Writing in academic settings usually takes the form of an assignment to be delivered or a research paper to be presented. The objective of these tasks is to let students gain profound knowledge of a given subject by searching, reading, analyzing, and finally writing about it. Relying on AI tools undermines this process and prevents students from learning through writing. Additionally, research papers are meant to add new knowledge to a field's literature. AI tools, so far, have nothing to add to the body of knowledge, since their output is assembled from previously published works.

On the other hand, AI can help with proofreading and generating designs, which saves time and reduces costs. AI can help with brainstorming too: "using those brainstormed ideas to write something in your own words with your own research can qualify as proper use, especially when the final work states that you used AI in the initial stages" (Turnitin, 2023).

There are several considerations when it comes to the use of AI in academic writing, including plagiarism, AI hallucination, biased responses, and outdated data.

AI tools generate text from existing datasets, which raises the possibility of producing plagiarized content. Furthermore, copying the generated text and pasting it into an academic paper is itself plagiarism, as it uses the work of others without citing the source. Moreover, because AI models rely on massive amounts of data to generate content, what is supposed to be new content may reinforce outdated scientific theories or ignore recent research.

In addition to the potential for plagiarism, AI tools may deliver inaccurate responses, a phenomenon called AI hallucination. "AI hallucination is a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate", states IBM (2023). Hallucination most likely occurs as a result of development errors, flawed training data, and lack of context, so it is wrong to rely blindly on generated output. For example, an AI model trained on a dataset of medical images may learn to identify cancer cells; however, if the dataset does not include any images of healthy tissue, the model may incorrectly predict that healthy tissue is cancerous (Google Cloud, 2024).

Biased content is a significant threat to a paper's objectivity, as generated content may reflect existing biases in the data. Algorithmic bias can lead to biased responses too: the algorithms used to train AI models can inadvertently amplify biases present in the data. For example, certain algorithms might prioritize keywords or concepts associated with dominant groups in a field, leading to biased recommendations or conclusions.
