AI and Academic Writing

Where are we with AI and academic writing? Frankly the situation is a little chaotic. The institutional line is still often that of ‘academic offence’, even though institutions know this would be extremely hard to enforce. One reason is that anti-plagiarism software like Turnitin is in perpetual catch-up mode with the AI available, so unless everyone sticks with ChatGPT (free from OpenAI; they won’t), Turnitin’s detection algorithms will be outpaced by newcomers and rephrasers. Another is that some students write in styles near-identical to an AI’s. The common case is the very capable second (or additional) language student: owing to the academic way the language is often learned, these students follow precise rules, and sometimes do so extremely well. Following those rules into stylistic habits, they end up sounding sufficiently like ChatGPT that the software (and sometimes staff) flag them, and they find themselves hauled in front of some disciplinary body just for being extremely smart.

This means that enforcement is hard to achieve, as one has to corroborate the appearance of AI-like style with some other evidence, e.g. a sudden alteration in style. This is possible, but time-consuming for academics, and if the student uses the AI throughout from the start, then no style change will be detectable; rephrasing software compounds the issue. Bearing in mind we’re in the total infancy of this technology, this is a difficult situation. I say difficult with no little thought; the situation is difficult not because of a negative connotation of difficulty, but because it is literally difficult to know what to do from here.

The essential question is: ‘is assessment by academic writing in its current form a dead horse that we need to stop flogging?’ And if it isn’t dead yet, how long before it is (if indeed it ever will be)? Personally I would say it isn’t dead yet, but its death is probably two to five years away. How will we know? We’ll know because the ability of AI to construct academic writing for students (and staff) will have permanently outstripped our ability to detect it, either with software or with our minds.

That is, in both content and style, AI will produce work for students who wish to use it, such that, at least for the written components, their engagement with the material can be pretty much nil. Furthermore, any student who, let’s say for integrity reasons, chooses to write their own work may find themselves penalised by being limited to their human writing skills. Their integrity will thus quite possibly earn them a lesser grade than their AI-using colleagues receive.

But as we’re not there (yet), what can we do in this strange hinterland? The issue seems related to the future of AI and our interactions with it. That is, how guilty we feel about the interactions we encourage turns partially on what AI will become. But since we cannot know where we are headed, we don’t know how guilty to feel. What do I mean by ‘feeling guilty’? I mean the sense that we are cheating when we get AI to do work for us. Isn’t this a kind of crucial border, this meeting place between a legitimate productive use and the loss of some part of ourselves which we possibly need to preserve?

Maybe we can sketch out two broad trajectories. In one, AI supplants our need for writing skills, as it can produce any text we need more accurately and in greater detail than we can achieve ourselves. In the other, writing skills continue to be needed because AI continues to fail to capture the human synthetic ability to generate insights. Because these insights were formed from human-generated cognitive concatenations (conscious or unconscious), the argumentative structures cannot be automatically written up by the AI, and hence the ability to lay out the argument and so on is still needed.

What is obvious is the blur between these heuristics. The former seems strange insofar as it indicates that whatever we want to write on, the AI can do it for us. This aligns the first trajectory roughly with what some (mostly undergraduate) students might use it for, whilst the latter seems more indicative of research usage.

The blur occurs because in the first case the student will still have an idea that they want the AI to write the essay on (admittedly they also might not). Either way they have to engage with the AI, and unless they literally want to hand in the first thing it writes, they have to do some thinking and engaging. No one is saying this minimal engagement is a good thing; it just means that even the laziest version involves some effort. The second trajectory suggests that writing is still needed; however, once the researcher has had their synthesising insight, whilst the AI may not be able to reconstruct the argument by itself, it can certainly help if you give it the different propositions and ask for paragraphs to be constructed around them. The general point being that the second trajectory, unless the academic is a kind of purist, doesn’t deny that AI could be used to help out with the writing.

It seems fairly clear that we want to avoid trajectory one, yet trajectory two could easily encompass quite a lot of AI-written input. It seems to me the crucial part here is the academic’s synthesising idea. This idea was only made possible by the reading and thinking (conscious and unconscious) that the academic did. This reminds us that what is important in the educational/research process is, of course, comprehension. The first option strikes us as so bad because comprehension is extremely low. As I tried to highlight, the redeeming part of trajectory one is that it sits on a gradient: some students will at least have an idea on the topic, get the AI to write the paper, and then read it to make sure it’s good. This redeeming aspect is their thinking engagement and comprehension.

Going forward with AI, we need to find ways to emphasise comprehension of subject matter. We also need to accept the potential of AI to write for us, to help us write up our ideas. The danger does lie in the lack of comprehension, but arguably there is already a great deal of lack of comprehension; AI is just exposing the latent lack of student integrity already in the system.

Academic writing in the traditional sense may well be ultimately largely supplanted by AI, but academic reading (and all other forms of learning, argument formation and thinking) cannot be allowed to go the same way. Indeed, in exposing the possible lack of motivation in the system, we can use this moment to think of new ways to engage students in understanding their subjects, and to help them want to understand their subjects. The best the AI can be for us is probably a new interlocutor. As soon as we have our new research insight, it goes into the system (the available research). From there it can be accessed by the AI to help other researchers, who must in turn think carefully and, through their own multiple inputs, create new insights.

So the guilt issue should not be viewed so much as an issue with writing; it’s an issue with comprehension. We need to absolve ourselves of this nebulous guilt through best practice in writing with AI, ensuring that we remain active comprehenders, processors and producers of information, as opposed to passive receivers of AI insights. So long as we are exercising our capacities to think and comprehend to the best of our ability, the AI becomes a partner that could be incredibly empowering. The danger lies in handing cognition and production over to it.
