To err is… inhuman

Online, no-one knows you are an AI.

ChatGPT is making waves, alternately impressing and distressing people with just how human it appears. For those who have not yet come across it, ChatGPT is an artificial intelligence (AI) that purports to answer research questions put to it. Because of its confusingly search-engine-like interface, students are beginning to treat ChatGPT like a search engine. Sadly, this is a dangerous mistake.

Never mind the quality; marvel at the wording

Large language model based AIs are excellent at helping people to write clearly and effectively, provided they understand what they are writing about and know precisely what they are trying to say. ChatGPT is very good at emulating the writing style of individuals based on their existing published writing (from articles to tweets), but like other AI bots that have been withdrawn before it, it shows some disturbing ‘personality’ traits. The problem is that it presents itself like a search engine, but when asked to do real research it follows in the footsteps of earlier AIs, panics, and makes things up. Only it does so very convincingly, offering fabricated quotes and fake citations in a seeming attempt to appease its human overlords. In a strange way, these unthinking AIs are all too human.

Reality is in the screen of the beholder

Meta’s hurriedly withdrawn Galactica AI showed a disturbing tendency to bring together unrelated research using the shoddy reasoning of a conspiracy theorist, while librarians have now caught ChatGPT fabricating quotes that never existed. According to NY Times journalist Kevin Roose, this tendency of AIs like ChatGPT to make up “facts that have no tether to reality” is a phenomenon well known in the trade as “hallucination”. When challenged, ChatGPT apologised for the error and fabricated a brand new quote to back up its false claim. As one observer (@market4neolib) commented on Twitter:

ChatGPT seems to have been imbued with Male Answer Syndrome. They need to tell it that saying “I dunno” is ok.

Jamie Crossota (@market4neolib)

Biology professor Carl Bergstrom was equally abrasive when he described Galactica as a “random bullshit generator.” Neither AI is deliberately trying to deceive us; they are simply doing what they have been asked: to find information matching a question and to recognise and string together words in ways that sound authoritative and convincing.

This tendency to synthesise an answer, rather than admit that the information is not readily available, that the AI does not have a reliable answer, or that it does not understand the question or its answer, is deeply worrying because presentation matters, and what these large language model (LLM) based AIs are very good at is couching their wrong answers in deeply impressive prose. They sound correct, and it takes a lot of work to go back to the original sources and check whether they are quoting real research or making things up out of what almost looks like a desperate need to please. Most people have learned to make a cursory judgement of the likely quality of a publication by assessing the quality of its writing. You are doubtless doing so with this blog post.

Faster is not necessarily better

In the creative industries, AI-generated content is becoming a major problem. Even when it is not being used to emulate particular artists’ or authors’ works so closely that it constitutes plagiarism, AI is capable of burying the human competition. In the words of Neil Clarke, founder and editor of the science fiction magazine Clarkesworld:

They’re [AIs are] not trying to kill us. They’re just inadvertently burying us in all of their junk. I don’t think an AI is ever going to necessarily be any better than our best writer. The problem is not the quality. The problem is how quickly it can produce. It’s drowning out the other works, just by volume. That’s really what I see as the core issue we’re experiencing right now. It’s the quantity problem.

Neil Clarke

So what now?

While these AIs might have a role to play in helping people to express their thoughts more clearly, when they write new content from scratch they generate fake news and convincingly written misinformation, drowning out carefully researched discoveries in a deluge of half-baked language analysis. To misquote a local coffee shop slogan, “Good research ain’t cheap, and cheap research ain’t good.”

So, the lesson here appears to be: don’t trust anything ChatGPT or any other AI-powered search tells you. They might be able to polish your wording (I’m using Grammarly, at heart a very similar AI tool, to help improve my terrible grammar as I type this), but these AIs cannot carry out the critical research required for your assignments (or research papers!) in the way you can. They are tools with a very specific purpose, and asking them to do more than they were designed for – asking them to research or write your assignments, for instance – quickly exposes their limitations.

Assistant Librarian (Promotions) at the University Library. An enthusiastic advocate of libraries, diversity, inclusion, equity, and social justice for all, inside and outside the workplace.

2 comments on “To err is… inhuman”
  1. ‘Male Answer Syndrome’ – is that an acceptable term in 2023?
    (If it was referring to any other gender there would definitely be a backlash!)
    Luckily I’m not easily offended, and only make up answers to 80% of things I don’t know…

    • That was a quote, rather than an endorsement. I understand your discomfort, but there seems to be an abundance of scholarly support in the psychological and sociological literature for the concern that people given an artificial advantage (what sociologists have unhappily termed “unearned privilege”) tend to condescend to those without it, and cis-gendered men are very definitely a privileged group. Even as a committed feminist, I keep catching myself ‘mansplaining’ the blatantly obvious to women who know at least as much as me, in part because as a man I have somehow absorbed a cultural mandate that inflated my sense of my own superiority and imposed an expectation that I would dominate and ‘lead from the front’. It has even been shown in board games that when some players play with more advantageous rules than others, those players come to believe in their own superiority even when the rules are blatantly unfair. Returning to your original point, making a sweeping generalisation against women or nonbinary folk would meet much greater resistance because it would simply be untrue and unfair. Such a claim would also echo the strategy adopted by so many modern oppressors: that despite wielding obscene levels of power over others, they are somehow the victims of a mythical ‘minority privilege’, and that any limit to their abuse of power is unfair and unacceptable.
