Google CEO on AI regulation: ‘There must be consequences’

by Ana Lopez

The capabilities of artificial intelligence – and the speed at which the technology is being released to the public – are driving a mix of reactions from tech enthusiasts, CEOs and pundits.

For Google CEO Sundar Pichai, AI is an increasingly important aspect of Google’s business. The company released its AI chatbot Bard in February and has other projects on the horizon, such as a prototype called “Project Starline,” which aims to improve video conferencing by simulating a more lifelike experience.

In an interview with “60 Minutes” on Sunday, Pichai said that AI is one of the most important discoveries of our time.

“I’ve always considered AI to be the most profound technology humanity is working on — deeper than fire or electricity,” Pichai said in the interview. “We are developing technology that will be far more capable than anything we’ve ever seen before.”

Pichai told the program that there should be government regulation of AI, especially with the rise of deepfakes, saying the approach to the technology would be “no different” from the company’s approach to spam in Gmail.

Related: We Asked Google’s AI Bard How to Start a Business. Here’s what it said.

“We are constantly developing better algorithms to detect spam,” Pichai said. “We should do the same with deepfakes, audio and video. Over time, there should be regulation. There should be consequences for making deepfake videos that harm society.”

In March, an open letter signed by technology leaders and CEOs (notably Elon Musk and Apple co-founder Steve Wozniak) called for a six-month pause in AI development to manage and assess potential risks. To date, the letter has more than 26,000 signatures.

Related: Bill Gates disagrees with the move to pause AI development – here’s why
