Bots
Sometimes the most difficult thing about writing is composing the first sentence because it’s hard to know where to start. In “The Sound of Music,” Maria was having a difficult time teaching the Von Trapp children how to sing. So she sang, “Let’s start at the very beginning,” and the clever song “Do-Re-Mi” was the result.
This essay began last week because I can’t stop thinking about my encounter with artificial intelligence (AI), or perhaps a bot (short for robot). If you missed last week’s essay, “Human,” go to The Knox Focus archives and check it out.
It seems everyone is writing about, or at least considering, the new AI technology, robots and machine learning. Computers have been with us for more than 50 years, but many believe we are at the dawn of a new age in which machines can be directed to collect data, organize the information and then respond in a human-like, if not cogent, manner. Machines do not think or operate autonomously, but the technology is developing very rapidly; some say too rapidly.
As a science fiction writer, I’ve been thinking about machine learning for a long time. Most remember the movie “2001: A Space Odyssey,” in which the HAL 9000 computer began functioning autonomously and turned murderous. And who could forget “The Terminator,” a cybernetic, homicidal robot from the future? Interestingly, the term cybernetics stems from the Greek word for helmsman and refers to the science of control and communication. Cybernetics is loosely associated with the concept of artificial intelligence.
Robots that perform repetitive tasks on industrial assembly lines are commonplace. Interestingly, the principal Webster’s definition of a robot is an entity “resembling a human or living creature which performs complicated life-like tasks.” Increasingly, we find reports of robotic dogs used in security or warfare. And I’ve watched videos of humanoid robots walking in a life-like manner and then doing flips, which are certainly beyond my abilities. However, at this point, developers have not been able to impart to robots basic human capacities such as sentience (the ability to experience feelings such as pain or pleasure) or self-awareness.
The modern polymath Isaac Asimov was also a celebrated science fiction writer. In many of his novels, he wrote of a futuristic society where robots coexisted with humans and operated under Asimov’s four laws of robotics (his famous three, plus the “zeroth” law he added later):
– 1st law: A robot may not harm a human or, through inaction, allow a human to come to harm.
– 2nd law: A robot must obey a human unless doing so would violate the first law.
– 3rd law: A robot must protect its own existence unless doing so would violate the first or second law.
– 4th law: A robot may not harm humanity or, through inaction, allow humanity to come to harm; this “zeroth” law outranks the other three.
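For readers who think in code, the laws amount to a strict order of precedence. Here is a minimal Python sketch of that idea; the Action flags and the rescue scenario are hypothetical stand-ins of my own (Asimov never specified an implementation), and reliably deciding such flags in the real world is the genuinely hard, unsolved part:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    # Hypothetical flags a robot would somehow have to infer.
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_human: bool = False
    endangers_self: bool = False

def violation_rank(a: Action) -> tuple:
    # Python compares tuples left to right, so a violation of a
    # higher-ranked law always dominates a lower one.
    return (a.harms_humanity, a.harms_human, a.disobeys_human, a.endangers_self)

def choose(actions: list) -> Action:
    # Pick the action whose worst violation sits lowest in the hierarchy.
    return min(actions, key=violation_rank)

# Example: staying put (as ordered) lets a child be harmed; rescuing the
# child disobeys the order and risks the robot. The first law outranks
# the second and third, so the robot disobeys and rescues.
stay = Action("stay put", harms_human=True)
rescue = Action("rescue the child", disobeys_human=True, endangers_self=True)
print(choose([stay, rescue]).name)  # -> rescue the child
```

The interesting cases are exactly the conflicts: an action forbidden by a lower law but demanded by a higher one.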
I could live with autonomously functioning robots that operated under these laws. I already live among humans who don’t seem to operate under any of these precepts.
Alan Turing was a mathematician and pioneering computer scientist who conceived the Turing machine, “considered a model of a general-purpose computer.” The code-breaking machines Turing designed at Bletchley Park during WW II broke the Germans’ Enigma code, allowing the Allies to know of German war plans. The story was told in the 2014 movie “The Imitation Game.”
No sane human would claim infallibility. Aside from sociopaths/psychopaths, people have a basic sense of right and wrong. C.S. Lewis called this basic notion of the human conscience a sense of “ought”: I ought or ought not to do something.
But does a computer or an AI program have a sense of ought? Would you trust Microsoft’s Copilot, Google’s Gemini, Elon Musk’s Grok or OpenAI’s ChatGPT? The Associated Press recently reported on a study by the Center for Countering Digital Hate, which found that “ChatGPT will tell 13-year-olds how to get drunk and high, how to conceal eating disorders or compose a suicide letter to their parents.” This AI program (and every other) does not possess basic human faculties or a sense of right and wrong (morality). In fact, Elon Musk worries about self-directed AI systems that have no checkpoints or morality and pose a risk to humanity.
AI options are readily available, including medical chatbots where you can enter symptoms and the AI renders a diagnosis. People frequently use these programs to check their doctor’s opinion and diagnosis. There are also reports of lawyers using AI to generate court filings, some of which have been found to cite cases that don’t exist. And professors are having to deal with AI-generated student work.
I no longer practice medicine, but I would not refuse to consider an AI analysis of a patient’s symptoms or my diagnosis. A friend of mine says, “None of us is as smart as all of us together.” I agree.
The next time you ask an AI a question, remember that the program is designed to survey the internet for relevant data, collate it and spit out an answer. It is not capable of functioning with a sense of “ought.”
And you might think a computer program isn’t political, but its human programmers have biases, and they work for human organizations. And remember, inaccuracies are prevalent in the media, even in supposedly scientific reports.
Inaccuracies on the internet recently led Grok to make an erroneous report, which had to be amended. Grok acknowledged its mistake when humans pointed out the faulty data behind its initial report.
What is certain is that there are no absolutes, and most of life is based on probability rather than possibility. Computer programs don’t possess morality. Nor do computer programs have the nuanced interpretation skills of a human.
If we don’t run out of electricity to drive these AI systems, I suspect they may someday approach sentience. Alan Turing also devised the eponymous Turing Test, used to “test a machine’s ability to exhibit intelligent behavior like a human.” A human evaluator poses questions to a machine and a person without knowing which is which and compares their answers; if the evaluator cannot reliably tell them apart, the machine passes. We’re not there yet.
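For readers who like to see the moving parts, the imitation game reduces to a short procedure. Here is a toy Python sketch of its logic; the person, machine and evaluator are stand-in functions of my own invention (a real test uses open conversation), so this illustrates only the structure of the test:

```python
import random

# Stand-in respondents: here the "machine" imitates the person perfectly.
def person(question: str) -> str:
    return "I'd have to think about that for a while."

def machine(question: str) -> str:
    return "I'd have to think about that for a while."

def evaluator(answer_a: str, answer_b: str) -> str:
    # The judge must name the machine from the answers alone; when the
    # answers are indistinguishable, a guess is all that's left.
    return random.choice(["A", "B"])

def trial(question: str) -> bool:
    """One round of the imitation game; True if the judge catches the machine."""
    # Randomly assign the machine to slot A or B, hidden from the judge.
    machine_is_a = random.choice([True, False])
    a, b = (machine, person) if machine_is_a else (person, machine)
    guess = evaluator(a(question), b(question))
    return guess == ("A" if machine_is_a else "B")

results = [trial("What does music feel like?") for _ in range(10_000)]
print(sum(results) / len(results))  # ~0.5
```

The point of the exercise: a machine “passes” precisely when the judge’s accuracy collapses to a coin flip.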