Opinion: AI alarm bells should ring loudly now

File photo/Chloe Ellingson/The New York Times / Dr. Geoffrey Hinton, an artificial intelligence pioneer, is shown at his home in Toronto on Monday, April 24, 2023. Hinton left Google so that he could freely share his concern that AI could cause the world serious harm.

The AI news got my attention when Dr. Geoffrey Hinton resigned from Google. The "Godfather of AI" wanted to speak publicly about the potential dangers of artificial intelligence.

"It is hard to see how you can prevent the bad actors from using it for bad things," he said. Really? I thought that AI's downside was creating essays for less-than-motivated students and driving teachers crazy.

What was I missing? Apparently a lot. I'm now getting emails informing me how AI is going to influence medical care. And how it's affecting veterinarians. Then there are the offers of how I can "create" a song in the style of famous musicians in just seconds. I decided to try out ChatGPT, the chatbot from Microsoft-backed OpenAI, to see how an AI chatbot worked. I asked it to write an article about me. The result was fairly thorough, but not entirely accurate, given the large number of individuals who share my name.

It was obvious that I needed to learn more about AI. I searched for information online. That was eye-opening. Amazingly, AI learning is everywhere ... including university degrees, graduate certificates, and online courses. The AI environment is only at its early stages of development. There will soon be a vast number of AI experts designing new applications for the technology that we can't even imagine yet.

Worries about artificial intelligence systems outsmarting humans intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT. No wonder 1,000 current AI researchers recently called for a six-month pause on the training of new, powerful AI systems. Their concern for the risks to society and humanity is legitimate.

Now Congress is looking into a variety of legislative actions about regulating AI. My first reaction was: Why is Congress even in this space? After all, how many of our elected representatives even know the basics of AI? My skepticism changed after hearing about Congress' potential ban of AI from nuclear facilities.

I never connected AI to anything nuclear. Maybe I should have paid closer attention to the warnings of executives at Microsoft and Google: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Even scarier was a recent headline: "Artificial Intelligence Threatens Extinction, Experts Say In New Warning." I thought that extinction of the human race through AI was a bad science fiction movie, not a realistic threat.

Other countries are scrambling to come up with AI regulations: Australia, China, France, Italy, Japan and Spain. The European Union plans to approve its AI Act later this year. I no longer question the appropriateness of congressional involvement.

But where are we in this process? According to Reuters, we are "seeking input on regulations." That refers to the Biden administration saying it wants public comments on potential accountability measures for AI systems. It also refers to legislation introduced recently that would create an AI task force assigned to identify the best ways to reduce the threats to "privacy, civil liberties and due process."

Does any of that sound fast-moving to you? I didn't think so. We need Congress to do something — and soon! It's way beyond time for a major revision of our thinking about AI. Folks, we're just seeing the beginning of a shift unlike anything humanity has encountered. Call your elected officials about AI and sound a warning bell for urgency.

Contact Deborah Levine, an author, trainer/coach and editor of the American Diversity Report, at deborah@diversityreport.com.