
Stuart Russell’s Three Key Principles to Create Safer AI

Funding for the AI and robotics startup sector has surged in the past several years. This comes as no surprise, since we are only at the beginning of the AI revolution. AI startups create enormous buzz among investors, who see the tremendous long-term potential of the industry.

According to CB Insights’ 2017 annual AI 100 list, which the company unveiled at its A-Ha! Conference, the companies on the list have raised $11.7 billion USD across 367 deals.

According to PwC’s Global Artificial Intelligence Study: Exploiting the AI Revolution, AI is estimated to contribute $15.7 trillion USD to the global economy by 2030.

More and more entrepreneurs are focusing on AI and developing innovative products. But the discussion around the topic raises many questions, some of them deeply skeptical: Will AI take over humanity? How many people will lose their jobs? What are the potential risks?

“Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled.” Alan Turing, 1951

“We had better be quite sure that the purpose we put into the machine is the purpose which we really desire.” Norbert Wiener, 1960

Entrepreneurs need to focus not only on creating AI, but on creating safer AI. In his TEDx Talk, AI pioneer Stuart Russell gives three key principles for creating safer, human-compatible AI; a small illustrative sketch of how they fit together follows the list:

  1. The robot’s only objective is to maximize the realization of human values.
  2. The robot is initially uncertain about what those values are.
  3. Human behavior provides information about human values.
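To make the principles a bit more concrete, here is a minimal, hypothetical sketch in Python of a robot that wants to maximize human values (principle 1), starts out uncertain about what those values are (principle 2), and updates its belief by watching what the human actually does (principle 3). This is not Russell's formulation itself (his research group formalizes the idea as cooperative inverse reinforcement learning); the two value hypotheses, the Boltzmann-rational model of the human, and all numbers below are illustrative assumptions.

```python
import numpy as np

# Two candidate hypotheses about what the human values, over three actions.
# Rows: hypotheses; columns: actions A, B, C. Values are made up for illustration.
reward_hypotheses = np.array([
    [1.0, 0.2, 0.0],   # hypothesis 0: the human mostly values action A
    [0.0, 0.2, 1.0],   # hypothesis 1: the human mostly values action C
])
belief = np.array([0.5, 0.5])  # principle 2: start uncertain about human values

def observe_human_choice(belief, chosen_action, rationality=5.0):
    """Principle 3: update the belief after seeing the human pick an action.

    Assumes a Boltzmann-rational human: the probability of choosing an action
    grows exponentially with its value under the true (unknown) hypothesis.
    """
    likelihoods = np.exp(rationality * reward_hypotheses[:, chosen_action])
    likelihoods /= np.exp(rationality * reward_hypotheses).sum(axis=1)
    posterior = belief * likelihoods
    return posterior / posterior.sum()

def pick_robot_action(belief):
    """Principle 1: maximize expected human value under the current belief."""
    expected_values = belief @ reward_hypotheses
    return int(np.argmax(expected_values))

# The robot watches the human choose action C (index 2), then updates and acts.
belief = observe_human_choice(belief, chosen_action=2)
print("belief over value hypotheses:", belief)            # shifts toward hypothesis 1
print("robot picks action index:", pick_robot_action(belief))  # now prefers C
```

The point of the sketch is the interaction of the three principles: because the robot is never handed a fixed objective, it has an incentive to keep observing the human and to defer when its belief about human values is still uncertain.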
