What bothers me about ChatGPT and A.I.

I took way too much time to write this post, as I had a lot of difficulty expressing my thoughts, and the more I worked on what I really wanted to say, the more I had to say. The first result was all over the place, so I started from scratch, and this new version feels “summarised,” but I’m not sure it’s clearer than the original, or whether it feels “complete” in my mind.

I decided to jump in anyway, never mind if some parts are confusing for you, dear readers: I had a very intense month of January at work, and my brain feels like it already needs a big vacation. If you’re wondering about some of my points, please use the “reply to this post” link at the end of the post, and I’ll do my best to explain them if needed, if they can be explained.

Here we go.

What bothers me the most about AI, and more specifically ChatGPT (let’s just say the chatbot format in general, mostly centred on answering questions and generating text), isn’t so much the fact that it could eventually, a couple of years from now, replace me at work for more than half of my tasks.

What bothers me the most is how many people use it in what will certainly be remembered as the wrong way.

Instead of using it as a proper tool, to get a “second opinion” on things, to get extra help with a first draft, or as a Google search alternative for a specific type of question, people use generative A.I. as the final answer to everything, as the final product of their work, as some sort of ultimate truth that can be used verbatim in many situations. All of this without putting much effort into analysing the answer, or thinking for themselves for two minutes.

I see this happening far too often. Coworkers, friends, family members, friends of friends... People rarely go beyond the initial output provided by the tool. They usually stop after typing in their — poorly written — first prompt and getting their first answer. I’m not sure if it’s laziness or a belief that the tool is somehow magic.

These tools are obviously far from perfect, as we’ve seen many times in the news. But beyond that, how can people use any form of technology so blindly?

It somehow reminds me of the 2000s, when Google was sometimes treated as a source in its own right, and some people said things like “well, I’ve read this on Google,” meaning they only glanced at the results and headlines without actually clicking through to the source website, where this thing called context could lead to many different interpretations of the so-called “answer.”

This wasn’t a common problem back then, because Google only gave a list of results: links with titles and meta descriptions. Most people would glance over a few potential sources and decide which of the first 5, 6, or 10 results to click. Users had a choice, even if they hardly ever visited the second page of results. If the links on the first page were not satisfying, it created frustration and people moved on.

The chatbot interface of ChatGPT is different. It gives people a single answer, something that looks finalised. There is no selection to be made. The tool tries to simplify everything and, in doing so, removes human choice from the answer to the prompt.

Of course, users can ask again or rephrase their prompt, but they rarely seem to do it. ChatGPT never asks for clarification when it doesn’t understand the question. Even if the prompt makes no sense, the bot will still produce an answer based on whatever it got from the input.

This is basically like using Google in the 2000s and only ever clicking the “I’m Feeling Lucky” button.

Just like with Google, the issue comes from the fact that the interface is deceptively simple: a search box or a chat window. Looking at the Google homepage or at ChatGPT, no one says, “Hmm, maybe I should learn how to properly use this tool,” because there’s apparently nothing to learn.

Today, in 2024, very few people know how to use Google efficiently. Searching within a specific website (site:) or for a specific file format (filetype:) is not common knowledge, and neither is choosing appropriate keywords, being specific enough, or avoiding queries that are bound to produce poor results.

With ChatGPT, we’ll run into the same pitfalls, and it can get even worse, because there is a relation of trust embedded in the chatbot format and in the popular perception of A.I. “magic.” “Artificial intelligence” sounds so much more powerful than “search engine.” This is why I like that some people would prefer A.I. to be called “applied statistics”: a term that removes a lot of the “magic” and describes what the technology actually is much better than the word “intelligence” does.

Speaking of trust, I can only recommend reading Bruce Schneier’s “AI and Trust” essay, published on the Harvard Kennedy School website:

It’s no accident that these corporate AIs have a human-like interface. There’s nothing inevitable about that. It’s a design choice. It could be designed to be less personal, less human-like, more obviously a service — like a search engine.

The companies behind those AIs want you to make the friend/service category error. It will exploit your mistaking it for a friend. And you might not have any choice but to use it.

There is also a “dumbing-down” aspect to ChatGPT when it comes to searching for information. People use it as an excuse to get an answer for everything without truly understanding what they are asking about. The process of discovery involves searching and exploring, and relying solely on the first answer provided by a robot that has processed vast amounts of data eliminates this “journey” aspect.