AI is already here: It’s powering your voice-activated digital personal assistants and Web searches, guiding automated features on your car and translating foreign texts, detecting your friends in photos you post on social media and filtering your spam.
But as practical uses of AI have exploded in recent years, one critical element remains missing: an industrywide set of ethics standards or best practices to guide the growing field.
Now, the industry heavyweights are partnering to fill that gap. Called the Partnership on Artificial Intelligence to Benefit People and Society, the group consists of Amazon, Facebook, Google, Microsoft and IBM. Apple is also in talks to join.
“We’ve been talking about this for many years — informally,” says IBM’s Vice President of Cognitive Computing Guruduth Banavar. “Finally we have this opportunity to formalize [the conversation].”
The group’s goal is to create the first industry-led consortium, one that would also include academic and nonprofit researchers, to help ensure AI’s trustworthiness: driving research toward technologies that are ethical, secure and reliable — that help rather than hurt — while also working to defuse fears and misperceptions about the field.
“We plan to discuss, we plan to publish, we plan to also potentially sponsor some research projects that dive into specific issues,” Banavar says, “but foremost, this is a platform for open discussion across industry.”
In a way, AI’s takeoff over the past few years snuck up on many of us. AI scientists have long predicted the surge, but its timing was a moving target. Now, machines are besting humans at translation and texting. Competition is growing among voice-activated assistants: Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana. IBM’s Watson supercomputer is writing recipes and helping doctors treat cancer. Google’s DeepMind defeated a human at …