AI is cool, but humans possess the real magic: Gilligan knew it all along
Recently, the GPS app on my phone apologized to me. Apparently, it had incorrectly calculated my route, and it abruptly said, “Sorry about that,” before giving me new directions.
Thinking my device may have taken our relationship to a new level, I replied, “That’s okay.”
The GPS immediately went back to barking orders, commanding me to stay in the right lane and make a turn at a designated intersection.
Our moment of almost-human interaction was fleeting. It was a programmed response meant to simulate real-life conversation, and it fell flat.
Did I expect to develop some sort of person-like friendship with my GPS? No, because ultimately, the GPS is a machine.
There has been a lot of talk lately about Artificial Intelligence, or AI, and best practices for implementing it.
Newspaper publishing giant Gannett said in June that it would tread slowly and carefully when using AI, assuring readers that there would be human oversight.
“The desire to go fast was a mistake for some of the other news services,” said Renn Turiano, senior vice president and head of product at Gannett, in a June 16 Reuters article. “We’re not making that mistake.”
Fast forward just a couple of months, and it looks like Gannett did indeed make the mistake it said it would avoid. In August, Gannett used Lede AI, an automated reporting system, to produce sports briefs. Several news outlets called the company out, reporting on Gannett’s guffaw-producing stories that sounded like they were written by robots using the wording of someone who had no knowledge of sports.
The original stories have all been updated by humans, but thanks to the Internet Archive’s Wayback Machine, an original story is preserved in cyberspace. Take note of the first sentence: “The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday.”
Oof! That had to be embarrassing.
In July, Saco Bay News contributing writer Randy Seaver wrote a profile on new Old Orchard Beach High School principal Michael Rodriguez. Unlike most stories in our hyper-local publication, this one had national appeal, as Mr. Rodriguez had been the assistant superintendent in the Uvalde, Texas, school district where a tragic shooting took place.
Within hours of Randy’s story going online, it was pirated by other “news sites.” A few words were changed, but ultimately, it was Randy’s story with someone else’s byline. The sites appeared to be engines designed to lure people in with an interesting story and then bombard them with malware and spam. I can only assume Artificial Intelligence is being used to poach the stories that appear on these sites.
About six months ago, I was invited to try Bard, a conversational generative artificial intelligence chatbot developed by Google. I had a lot of fun asking it questions and seeing what answers it would come up with.
I asked Bard to write a story about the Biddeford parking garage, which many of you know has been a controversial topic. Bard quickly responded with a (fictional) story about a woman named Sarah, who was new to Biddeford. Sarah was overjoyed to discover the parking garage. She was so pleased that she sent a letter to the City Council, and the Council contemplated building a second garage.
I sent this story to Randy, who has written numerous stories and columns about the parking garage. His response: “I almost spit out my coffee.”
If you read the Frequently Asked Questions about Bard, Google does offer this disclaimer:
“Bard is experimental, and some of the responses may be inaccurate, so double-check information in Bard’s responses. With your feedback, Bard is getting better every day. Before Bard launched publicly, thousands of testers were involved to provide feedback to help Bard improve its quality, safety, and accuracy.”
While Bard and other AI chatbots have their limitations, many people have found them to be useful tools for research, compiling data and other tasks. Artificial Intelligence engines can supplement our work, but they cannot replace humans.
But I knew that all along. I learned from a 1981 made-for-television movie that robots can never replace humans.
In the equally precious and awful movie “The Harlem Globetrotters on Gilligan’s Island,” the Harlem Globetrotters must beat a team called The New Invincibles in a game of basketball to save the island from an evil scientist. When the coach sees the robots whiz across the court and toss the ball to each other with rapid-fire precision, he panics and tells the team, “Forget all this fancy stuff and play some hard straight-up basketball.”
The team does just that, and at half-time, the robots have an embarrassingly high lead. Fortunately, the guileless Gilligan says something to the professor that gives him an idea. The Harlem Globetrotters should use their tricks and hijinks on the court, because the robots won’t be able to interpret them. “They can only do what they were programmed to do, they can’t think,” says the professor.
The professor talks with the team, and the coach agrees. “Use your incredible magic and wizardry,” he tells the team. And it works. While the Harlem Globetrotters perform their antics with flair on the court, the robots perform like robots: stiff and unable to react to the nuances of human behavior.
We humans possess the real magic and wizardry, not robots or Artificial Intelligence. A robot may spit out some data, but if you want to get the real story, read something written by a person.
This column was written by Saco Bay News Publisher Liz Gotthelf, and not a robot. If you would like to contact her, email newsdesk@sacobaynews.com.