
Artificial or Authentic? The Terminology Trap of AI

While tools like ChatGPT and other Leonardos appear relatively safe at first glance, the broader use of artificial intelligence across various settings and fields can have enormous consequences, ranging from freedom to imprisonment and even matters of life and death.
Image by Albert Stoynov: a close-up view of a network server setup, with multiple Ethernet cables connected to a device labeled “Cloud Router Switch.” The cables are grey and neatly bundled, some with white connectors, connecting various ports on the router.

From Dartmouth to ChatGPT: The Evolution of AI and Our Misconceptions

A Very Brief History of Ubiquity

Artificial intelligence has entered our lives in ways that are sometimes obvious and sometimes subtle enough to go unnoticed by anyone not paying close attention: facial recognition in public spaces, automated customer service systems, predictive policing, or even algorithm-driven risk assessments within the parole system.

(There are many examples of uses in the parole and sentencing systems; if you’re interested in this specific topic, look up State v. Loomis, in which the Wisconsin Supreme Court upheld the use of an algorithmic risk assessment at sentencing. The Supreme Court of the United States later declined to review the case.)

While tools like ChatGPT and other Leonardos appear relatively safe at first glance, the broader use of artificial intelligence across various settings and fields can have enormous consequences, ranging from freedom to imprisonment and even matters of life and death. At this point, it is entirely unclear whether this technology will ultimately bring more good than harm, which is why these debates are raging. In this essay, however, I want to explore another aspect of this phenomenon.

As you’ve probably noticed, the term AI and the technologies behind it have become ubiquitous over the past decade or so, especially since the release of the first ChatGPT in late 2022. They’re everywhere, which is one of the reasons we should carefully explore every aspect of this recent phenomenon.

To Craft or Not To Craft

Let’s forget the intelligent part for a second and focus on the word ‘artificial’. The term artificial has Latin roots. It derives from artificialis, which derives from artificium. 

Artificium itself combines ars, meaning art, craft, or skill, with facere, meaning ‘to make’ or ‘to do.’ Together, they give artificium its essential meaning: “something made by skill or craft.”

The addition of the suffix ‘-alis’ simply adds the meaning ‘concerning,’ which turns artificium into artificialis; that’s how we end up with the translation “concerning things made by skill.”

This term doesn’t strike me as particularly adequate to describe the technologies we’re discussing here for several reasons. So, let’s continue focusing on this word, shall we?

I was curious to know when the term artificial intelligence was first used. It seems the first usage dates back to the 1955 proposal for the Dartmouth Summer Research Project, a workshop held in 1956. Several pioneering scientists, among them John McCarthy and Marvin Minsky, launched this project. Their contributions were pivotal, establishing core principles of artificial intelligence that continue to shape and inspire the field even now. The proposal stated that:

“Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

This quote comes from that original proposal. There are debates as to who exactly coined the term artificial intelligence first, but that’s beyond our current focus. It is also worth knowing that closely related ideas had been explored much earlier, notably by Alan Turing as early as the 1930s.

Alan Turing’s most renowned contribution came in 1950 with a paper called ‘Computing Machinery and Intelligence.’ It introduced what would become known as the Turing Test, which aimed to determine whether a machine could exhibit intelligent behavior indistinguishable from that of a human. The same paper also posed the question, ‘Can machines think?’

Shifting Meaning

One of the underlying problems I see with the term ‘artificial’ is the significant shift in meaning that has occurred over the centuries. At first, it did literally mean “concerning things made by skill.” Today, however, that’s not what comes to mind at all when we hear or see this word. Gradually, the term acquired a somewhat pejorative meaning, implying an underlying ‘fakeness.’ It’s artificial, synthetic, so it is not authentic.

The Industrial Revolution (from the late 18th to the early 19th century) marked a significant transformation in manufacturing processes. Before this period, goods were predominantly handcrafted. Artisans meticulously created each item, which gave goods a sense of authenticity and uniqueness. The advent of industrialization introduced the mass production of goods.

This shift contributed to a perception of mass-produced goods as ‘artificial,’ in contrast with the ‘natural’ quality of handmade items. Over time, ‘artificial’ began to carry these connotations of being manufactured, less genuine, or even deceptive and inferior. The word has since reflected societal concerns about the loss of craftsmanship and the rise of impersonal production methods.

There is also a connotation of weirdness. This might be one of the reasons people were so stunned when artificial intelligence beat human champions at Go. Go is considered one of the most complex board games because of its vast number of possible moves, which makes mastering it a significant achievement for machines: it demonstrates an ability to handle strategic depth and creativity previously thought to be uniquely human.

How could something inferior achieve that? It is counterintuitive, but have you ever considered that our perception of the word artificial might have fooled us?

The most essential bias related to the word ‘artificial’ is that it sounds unreal. As in: if something is artificial, it is not really real; it’s artificial. This connotation might be the most harmful of all because, of course, artificial intelligence is real, extremely real, especially for the many people who lost their jobs in silence, for those who have been imprisoned by way of algorithmically assisted decision-making, or for 14-year-old Sewell Setzer III, who lost himself in a chatbot and took his own life after a short, final conversation.

“What if I told you I could come home right now?” the boy asked.
“...please do, my sweet king,” Dany the bot replied.

And that was it. He used his stepfather’s .45 caliber handgun.

The word ‘artificial’ in artificial intelligence is inadequate because of all the connotations I have laid out above. As these connotations blend together, they create a sensation of vagueness and distance, as if the technology simply floats out there in the cloud, somewhere within the hazy, ungraspable internet, while AI-enabled drones also float in the real sky, and they kill.

This chasm between our perception of the term ‘artificial intelligence’ and the tangible reality is, in my opinion, extremely harmful. Also harmful is the subconscious false sense of security that can be derived from it: it’s artificial, so it’s not real; and if it’s not real, it can’t be dangerous. Unless this is put under the spotlight, that is, raised to consciousness, it operates mainly at the subconscious level, which adds to the potential danger.

This technology is not artificial in the sense the word carries today; it’s different, utterly different. We also know that what is perceived as different is often scary, but in this case we should use appropriate words so that we are ready to face whatever actually happens. Whatever the future might bring, it will be different, and a response based on fear or denial won’t cut it. In this regard, I have an alternative I will tell you about a little later.

Don’t Be Misled

Nowadays, as you have seen for yourself, people throw the term ‘artificial intelligence’ around without any restraint and, worse, without ever pausing to ask whether the term is actually appropriate to the technologies they’re discussing. The desire to attract clicks, views, and listeners is not a good enough excuse. How often have we seen titles like

‘My ten best prompts for ChatGPT 4o—this will change your life.’

(ChatGPT is also a clickbait term, of course.) Or less sophisticated titles such as

‘How to use artificial intelligence to 10x your productivity.’

We need to be honest here. If people genuinely think that such titles and posts will make them stand out, they are highly shortsighted, because this is precisely the kind of content that ChatGPT and its peers ‘excel’ at writing. It doesn’t involve the analysis of emotions, the need for elaborate thinking, or anything related to sensations and senses; in other words, it rarely involves any complexity whatsoever.

Yes, complexity can deter some readers, but the human brain is an extremely complex organ and tool, and that complexity is why we are here today, surrounded by technology and living, on the whole, better lives than our ancestors.

Obviously, the people writing such posts are a million miles away from asking themselves whether the terms they’re using are even adequate. That’s one tragedy, but not the only one. I’m not interested in pointing fingers at people (even though, in a sense, I just did), and I hope this critique won’t be felt as negative; it is meant as an encouragement to learn and to think more deeply, more than anything else.

At a time when the internet is flooded with content, whether you are a content creator, a consumer of content, or a parent, it’s crucial to reflect more deeply on this phenomenon, because it has a tremendous impact on our lives. And I caution against thinking that some of us will be spared, because no one will be.

Introducing Kitsigen Intelligence

All is not bleak—far from it—and I think it is the right time to introduce the concept of Kitsigen Intelligence. I created this term because I felt an urge to bring a little bit of order to the AI-related chaos that is occurring across the planet right now. So what is Kitsigen Intelligence, you rightfully ask? The answer will be revealed next week, in the second part of this essay.


Sources:

McCarthy, John, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. 1955.

Richardson, Rashida, Jason Schultz, and Kate Crawford. "Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice." New York University Law Review Online, vol. 94, 2019, pp. 192-233.

Silver, David, et al. "Mastering the Game of Go with Deep Neural Networks and Tree Search." Nature, vol. 529, no. 7587, 2016, pp. 484-489.

State v. Loomis. 881 N.W.2d 749. Wisconsin Supreme Court, 2016.

Turing, A. M. "Computing Machinery and Intelligence." Mind, vol. 59, no. 236, 1950, pp. 433-460.