The rhetoric in an open letter from the Future of Life Institute think tank, writes David Olive, is at times alarmist: “Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us?”

Tech leaders like Elon Musk want a slowdown in AI advances. Is the world needlessly panicking?

We definitely need regulatory guardrails around artificial intelligence, writes David Olive, but we still have a chance to shape AI before it shapes us.

How concerned should we be about artificial intelligence, one of the most advanced forms of computing?

We should insist that some regulatory guardrails be put around it, as we failed to do in the early days of the internet and social media.

But there’s no reason to panic about AI, even though many AI experts have themselves come close to panicking in recent weeks.

Artificial intelligence is, of course, already in widespread application.

Automakers use it in their driver-assistance features. Alphabet Inc.’s Google and Microsoft Corp. are adding AI features to their online search and productivity tools, including Word, Outlook and Excel. If you are online, Microsoft delivers those AI features to your devices automatically through software updates.

And this summer, Air Canada will start experimenting with an AI-powered system that more quickly alerts travellers to delayed and cancelled flights.

The potential ubiquity of AI was highlighted last month when OpenAI, biggest of the new AI firms and backed by Microsoft, unveiled GPT-4, an upgraded version of the model behind ChatGPT, the chatbot that gained enormous popularity when it was launched in November 2022.

With their ability to rapidly gather data from the four corners of the internet and present it in text and speech that simulates human communication, GPT-4 and its rival apps could soon be as commonplace as the cellphone.

OpenAI will first have to change the clunky name of its natural-language app. GPT stands for “generative pre-trained transformer.” “Generative” means the system produces new text of its own, and “transformer” refers to the type of neural network that powers it.

And “pre-trained” means the system learned the patterns of human language in advance, from a vast trove of internet text, before it was released to the public.

Its champions make big claims for AI. It can, they say, reverse the decade-long slump in global productivity. And it can reduce inflation by cutting costs through efficiency gains.

Goldman Sachs Group Inc., the investment bank, recently calculated that global corporate spending on AI will reach $2.5 trillion (U.S.) by 2030.

The AI sector has recently become one of the hottest investment fields.

Last month alone, about $105 million (U.S.) poured into AI and robotics exchange-traded funds (ETFs), while money flowed out of ETFs focused on electric vehicles (EVs), cloud computing, and clean energy.

So, why the near panic over AI?

In an open letter last month, the Future of Life Institute (FLI), a U.S. think tank largely funded by Elon Musk, called for a six-month moratorium on AI advances more powerful than ChatGPT-4.

At this writing, the FLI’s petition boasts more than 5,000 signatories and the FLI says it has another 50,000 or so it has yet to post. The list reads like a who’s who of AI engineers, researchers and business executives.

The signatories include Musk; Apple Inc. co-founder Steve Wozniak; employees of Microsoft and Google, which are engaged in what the petition describes as an “out-of-control” arms race to develop new AI applications; and AI academic pioneers Stuart Russell of the University of California, Berkeley, and Yoshua Bengio of the Université de Montréal.

It’s doubtful that all the signatories buy into the petition’s alarmist rhetoric. To wit, “Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us?”

It is likely, though, that the petitioners see a chance for society to gain control over early-stage AI and not repeat the mistake of failing to do so with the internet and social media.

In a Montreal press conference to reinforce the petition, Bengio expressed his concern that AI could have “negative uses, and that society is not ready for that.”

Those risks include the absence of safeguards against AI’s use by rogue regimes, terrorists, ransomware attackers and other cyber criminals.

Across town at the Montreal-based AI think tank Mila, executive director Benjamin Prud’homme is concerned that AI, in tapping into online information, could expose users to the internet’s abundance of disinformation.

The petition is not alone in calling for a new regulatory apparatus to ensure that society’s best interests are served by AI.

The U.S. Congress is examining AI’s potential hazards, and Canada and the U.K. have taken preliminary steps in setting ethical standards for it.

We still have a chance to shape AI before it shapes us.

It’s almost quaint to recall that the earliest mantra of the internet pioneers was that “information wants to be free.”

The internet has since become one of the most profit-driven enterprises on the planet. And it has served Big Tech’s bottom line to make only half-hearted efforts at self-policing internet content.

There’s little sign yet of AI following a more noble path.

Esteemed computer scientist Margaret Mitchell was fired in 2021 as co-head of ethics at Google’s AI unit after she co-published a paper criticizing biases in the industry’s natural-language apps, including Google’s own.

For now, at least, the operating principle of the AI moguls appears to be borrowed from Bertolt Brecht: “Grub first, then ethics.”

David Olive is a Toronto-based business columnist for the Star. Follow him on Twitter: @TheGrtRecess