Second & Flume: Skynet

In the age of AI, we can’t even believe our own eyes

Melissa Daugherty

A futurist I am not.

I was in middle school when Back to the Future Part II came out in 1989, and I specifically remember a scene in which Marty McFly, who has traveled to the year 2015, sees his future self talk in real time to the person on the other end of a call on what I'd then have described as a television phone.

“That’s so cool, but it won’t happen in my lifetime.”

I said those words with utter confidence.

Little did I know that “video telephony” had been in the works for nearly half a century by then. This was a few years before everyone and their sister had an AOL account or had even heard the term World Wide Web.

While I had a pretty vivid imagination as a kid, I was always a skeptic. Scoping out situations and especially people and their motives is something that has protected me countless times and no doubt aided me as a reporter.

While I was dead wrong about the aforementioned advance in communications, and I’ve certainly learned my lesson, I largely remain and encourage others to be skeptical. In fact, based on today’s emerging technologies, I’d say having a questioning mind is more important than ever.

Of utmost concern to this journalist is non-human intelligence—one of the many existential issues facing humanity—which is being adopted at a much faster pace than I’d anticipated.

First it was the deepfakes. Now it’s ChatGPT and other AI apps designed to write as if they were human. These technologies are more than a bit terrifying, considering how some people will believe anything.

Take, for example, the attempted coup on Jan. 6, 2021. The thousands of brainwashed, impressionable Americans who stormed the Capitol that day were convinced to do so by a small cadre of radicals, including the then-president, who spread the lie about the election being stolen.

Let’s also not forget Pizzagate back in 2016: That’s the conspiracy theory accusing certain high-ranking Democrats of being sex-trafficking Satanists who drank the blood of children in the basement of a Washington, D.C., pizza parlor. Then, of course, hitting closer to home, back in 2018, was Rep. Marjorie Taylor Greene’s claim that Jewish space lasers burned down the town of Paradise.

To the average person, those ridiculous—and in MTG’s case anti-Semitic—conspiracy theories are immediately dismissible. But to the QAnon crew and other folks who tend to place stock in such whackadoodle scenarios, they are often a call to arms.

My point? What happens when AI is co-opted by better organized nefarious forces in an effort to deceive the masses? It might be a lot easier than one would think in a Republic that at times feels like it’s held together by a bare thread.

In the near term, humans using AI to turn people against each other seems like a much more plausible danger to society than—bear with me on this one—the tech going completely off the rails and attempting to exterminate humans a la Skynet in Terminator 2. Although I’m not sure I’d blame the robots when they realize the biggest threat to Earth—climate change, anyone?—and therefore to their own existence is mankind.

In all seriousness, I do realize that AI can also be used for good. It can quickly analyze data for improved medical treatments, for example. But it needs to be regulated and everyone should be extremely cautious using it.

Indeed, be skeptical of everything online—whether you read it, hear it or see it—and learn how to properly fact-check. Because AI can be used to do a lot of harm, and it’s not going away.

About Melissa Daugherty
Melissa Daugherty is an award-winning columnist and editorial writer who started her career as a higher education reporter at a daily newspaper. Daugherty spent 17 years at the CN&R, eight of them as editor-in-chief. Comforting the afflicted and afflicting the comfortable is her super power.
