
Google gathers an external panel to consider AI challenges

Google today announced that it's formed an external advisory group — the Advanced Technology External Advisory Council (ATEAC) — tasked with "consider[ing] some of the most complex challenges [in AI]," including facial recognition and fairness in machine learning. The move comes roughly a year after the Mountain View company published a charter to guide its use and development of AI, and months after Google said it would refrain from offering general-purpose facial recognition APIs until lingering policy questions are addressed.

ATEAC — whose eight-member panel of academics, policy experts, and executives includes Luciano Floridi, a philosopher and expert in digital ethics at the University of Oxford; former U.S. deputy secretary of state William Joseph Burns; Trumbull CEO Dyan Gibbens; and Heinz College professor of information technology and public policy Alessandro Acquisti, among others — will serve over the course of 2019 and hold four meetings, starting in April. Members will be encouraged to share "generalizable learnings" that arise from the discussions, Google says, and a summary report of the council's findings will be published by the end of the year.

“We recognize that responsible development of AI is a broad area with many stakeholders, [and we] hope this effort will inform both our own work and the broader technology sector,” wrote Google’s senior vice president of global affairs Kent Walker in a blog post. “In addition to consulting with the experts on ATEAC, we’ll continue to exchange ideas and gather feedback from partners and organizations around the world.”

Google first unveiled its seven guiding AI Principles in June, which ostensibly preclude the company from pursuing projects that (1) aren't socially beneficial, (2) create or reinforce unfair bias, (3) aren't built and tested for safety, (4) aren't "accountable" to people, (5) don't incorporate privacy design principles, (6) don't uphold high standards of scientific excellence, and (7) aren't made available for uses that accord with all of the principles. And in September, the company said it had established a formal review structure to assess new "projects, products and deals," under which more than 100 reviews had already been completed.

Google's long had an AI ethics review team consisting of researchers, ethicists, human rights specialists, policy and privacy advisors, legal experts, and social scientists, who handle initial assessments and "day-to-day operations," as well as a second group of "senior experts" from a "range of disciplines" across Alphabet, Google's parent company, who provide technological, functional, and application expertise. A third council, made up of senior executives, navigates more "complex and difficult issues," including decisions that affect Google's technologies.

But those groups are internal, and Google’s faced a cacophony of criticism over its recent business decisions involving AI-driven products and research.

Reports emerged this summer that Google had contributed TensorFlow, its open source AI framework, to Project Maven, a Pentagon contract that sought to implement object recognition in military drones. Google reportedly also planned to build a surveillance system that would've allowed Defense Department analysts and contractors to "click on" buildings, vehicles, people, large crowds, and landmarks and "see everything associated with [them]."

Project Maven prompted dozens of employees to resign and more than 4,000 others to sign an open opposition letter.

Other, smaller gaffes include the omission of both feminine and masculine translations for some languages in Google Translate, Google's freely available language translation tool, and a biased image classifier in Google Photos that mistakenly labeled a black couple as "gorillas."

To be fair, Google isn’t the only company that’s received criticism for controversial applications of AI.

This summer, Amazon supplied Rekognition, a cloud-based image analysis technology available through its Amazon Web Services division, to law enforcement in Orlando, Florida, and to the Washington County, Oregon Sheriff's Office. In a test — the accuracy of which Amazon disputes — the American Civil Liberties Union demonstrated that Rekognition, when fed 25,000 mugshots from a "public source" and tasked with comparing them to official photos of members of Congress, misidentified 28 of those members as criminals.

And in September, a report in The Intercept revealed that IBM worked with the New York City Police Department to develop a system that allowed officials to search for people by skin color, hair color, gender, age, and various facial features. Using "thousands" of photographs from roughly 50 cameras provided by the NYPD, IBM's software learned to identify clothing color and other physical characteristics.

But today's announcement — which perhaps not coincidentally comes a day after Amazon said it would earmark $10 million, in partnership with the National Science Foundation, for AI fairness research — appears to be an attempt by Google to fend off broader, continued criticism of private sector AI pursuits.

“Thoughtful decisions require careful and nuanced consideration of how the AI principles … should apply, how to make tradeoffs when principles come into conflict, and how to mitigate risks for a given circumstance,” Walker said in an earlier blog post.
