Chat Bot Sentience? Nah…



Picture created by Daniella “D.J.” Fragile, editor of The Eagle Dispatch.

Eram Ashir and Daniella "D.J." Fragile

Imagine if a chatbot could give reasonable answers to questions, create software to perform given tasks, write essays, and more. Recently, stories circulated about a “bot” giving sentient responses to questions, creating mild panic that we might one day be bested by bots. But what’s the real story?

In November 2022, the AI company OpenAI released a chatbot named ChatGPT. ChatGPT is a large language model, a type of AI system built to generate text based on example texts it has already seen and been trained on. Over a period of a few weeks, the system used large computers to process enormous amounts of example text from many sources on many topics. It was then given various prompts and learned to mimic the example text, generating answers and responses similar to those examples. After enough trials and repetitions, the system could produce advanced, human-like text, as it was trained to do. So ChatGPT can be given a question or a prompt on almost any topic in natural language, and it generates human-like answers and human-like dialogue, but this doesn’t mean that it’s sentient.
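The core idea of generating text by mimicking training examples can be sketched with a toy model. To be clear, this is not how ChatGPT actually works internally (it uses a massive neural network, not word counts), but this minimal bigram model illustrates the same basic principle: learn which words tend to follow which from example text, then generate new text from those learned patterns. The training text and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Toy "training data" -- a real system ingests billions of words.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat saw the dog ."
)

# Training step: for each word, record the words observed to follow it.
following = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    following[current].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly picking a word that followed
    the previous one in the training examples -- pure mimicry,
    with no understanding of what the words mean."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = following.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the"))
```

Everything the model "says" is recombined from its training text, which is why, at any scale, such a system can only echo patterns in human-written data rather than reason on its own.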

Although ChatGPT seems intelligent, it lacks the basic human ability to reason and feel on its own (hence, no true sentience). Instead, ChatGPT is a sophisticated parrot, mimicking example texts to generate human-like responses. The information that ChatGPT gives in its responses is all obtained from its vast training data and examples, which are, you guessed it, human-generated. Additionally, ChatGPT can give wrong answers, as it has no access to any information beyond the data on which it was trained. So, sleep soundly, humans. You are not (yet) being taken over.