Elon Musk-backed grants fund research to keep AI safe and beneficial
Amid rapid industry investment in developing smarter artificial intelligence, a new branch of research has begun which aims to ensure that society can reap the benefits of AI while avoiding potential pitfalls. The Boston-based Future of Life Institute (FLI) has announced the selection of 37 research teams around the world to which it plans to award about $7m from Elon Musk and the Open Philanthropy Project as part of a first-of-its-kind grant programme dedicated to 'keeping AI robust and beneficial'.
The programme launches as an increasing number of high-profile figures including Bill Gates, Elon Musk and Stephen Hawking voice concerns about the possibility of powerful AI systems having unintended, or even potentially disastrous, consequences. The winning teams, chosen from nearly 300 applicants worldwide, will research a host of questions in computer science, law, policy, economics and other fields relevant to coming advances in AI. The 37 projects being funded include:
- Three projects developing techniques for AI systems to learn what humans prefer from observing our behaviour, including projects at UC Berkeley and Oxford University;
- A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values;
- A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans;
- A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial;
- A project headed by Heather Roff studying how to keep AI-driven weapons under 'meaningful human control'; and
- A new Oxford-Cambridge research centre for studying AI-relevant policy.
Jaan Tallinn, co-founder of FLI and Skype, described the new research direction: "Building advanced AI is like launching a rocket. The first challenge is to maximise acceleration, but once it starts picking up speed, you also need to focus on steering."
When the FLI issued an open letter in January calling for research on how to keep AI both robust and beneficial, it was signed by a long list of AI researchers from academia, nonprofits and industry, including AI research leaders from Facebook, IBM and Microsoft and the founders of Google’s DeepMind Technologies. It was seeing this widespread agreement that moved Elon Musk to seed the research programme that has now begun.
"Here are all these leading AI researchers saying that AI safety is important," said Musk at the time. "I agree with them, so I'm today committing $10m to support research aimed at keeping AI beneficial for humanity."
“I am glad to have an opportunity to carry out this research focused on increasing the transparency of AI robotic systems,” said Manuela Veloso, past president of the Association for the Advancement of Artificial Intelligence (AAAI) and winner of one of the grants.
“This grant programme was much needed: because of its emphasis on safe AI and multidisciplinarity, it fills a gap in the overall scenario of international funding programmes,” added Prof. Francesca Rossi, President, International Joint Conference on Artificial Intelligence (IJCAI), also a grant awardee.
Tom Dietterich, President, AAAI, described how his grant, a project studying methods for AI learning systems to self-diagnose when failing to cope with a new situation, breaks the mould of traditional research: “In its early days, AI research focused on the ‘known knowns’ by working on problems such as chess and blocks world planning, where everything about the world was known exactly. Starting in the 1980s, AI research used probability distributions to represent and quantify the likelihood of alternative possible worlds. The FLI grant will launch work on the ‘unknown unknowns’: How can an AI system behave carefully and conservatively in a world populated by unknown unknowns (aspects that the designers of the AI system have not anticipated at all)?"
With Terminator Genisys recently in cinemas, organisers stressed the importance of separating fact from fiction. “The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI,” said Max Tegmark, President, FLI. “We're staying focused, and the 37 teams supported by today’s grants should help address those real issues.”
The full list of research grant winners can be found here. The teams will be funded for up to three years, with most of the research projects starting by September 2015; the remaining $4m of the Musk-backed programme will be focused on the areas that emerge as most promising.
FLI's mission is to catalyse and support research and initiatives that safeguard life and develop optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges.