It’s not easy to craft a narrative that touches on hundreds (if not thousands) of platforms and services, but Google made an effort to bring privacy to the fore this week — particularly in the area of artificial intelligence and machine learning.
It detailed its work in federated learning, a distributed AI approach in which models are trained directly on users' devices; only encrypted model updates, never the raw training data, are sent to the cloud, where they're aggregated into an improved shared model. (Google says that its Gboard keyboard for Android and iOS already uses federated learning to improve next-word and emoji prediction across "tens of millions" of devices.) This work dovetails, the company said, with recent privacy-focused improvements to TensorFlow, its open source machine learning framework.
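To make the mechanism concrete, here is a minimal sketch of federated averaging, the aggregation scheme at the heart of this approach. This is plain NumPy, not Google's actual implementation; the function names and the toy linear-regression task are purely illustrative. Two simulated "devices" each train on local data they never share, and a server merges only the resulting model weights:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Simulate one client's on-device training (linear regression via
    gradient descent). The raw data (X, y) never leaves this function;
    only the updated weights are returned."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side step: combine client models, weighting each by its
    number of local examples (the FedAvg rule)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two simulated devices holding disjoint slices of data from y = 2x.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.uniform(-1, 1, size=(50, 1))
    y = 2.0 * X[:, 0]
    clients.append((X, y))

global_w = np.zeros(1)
for _ in range(20):  # each round: local training, then server averaging
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(float(global_w[0]))  # converges toward the true slope of 2.0
```

In a real deployment the updates would also be encrypted and aggregated securely, so the server sees only the combined result rather than any one device's contribution.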
In March, during the TensorFlow Developer Summit, Google announced TensorFlow Federated (TFF), a module that facilitates the deployment and training of AI systems across multiple separate, local devices. At the same conference, Google debuted TensorFlow Privacy, a library intended to make it easier to train TensorFlow models with strong privacy guarantees.
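The core technique behind libraries like TensorFlow Privacy is differentially private SGD: each training example's gradient is clipped to bound its individual influence, and calibrated Gaussian noise is added before the averaged gradient is applied. A sketch of that single step in plain NumPy, where the function and parameter names are illustrative and not the library's actual API:

```python
import numpy as np

def privatize_gradients(per_example_grads, clip_norm=1.0,
                        noise_multiplier=1.1, rng=None):
    """Illustrative DP-SGD step (not the TensorFlow Privacy API):
    clip each per-example gradient to L2 norm <= clip_norm, sum,
    add Gaussian noise scaled to the clipping bound, then average."""
    rng = rng if rng is not None else np.random.default_rng()
    clipped = [
        g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
        for g in per_example_grads
    ]
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

# A batch of four raw per-example gradients; the last is an outlier
# whose influence clipping will curb.
grads = [np.array([0.3, -0.1]), np.array([0.2, 0.4]),
         np.array([-0.5, 0.1]), np.array([9.0, 9.0])]
private_grad = privatize_gradients(grads, rng=np.random.default_rng(42))
print(private_grad.shape)  # (2,)
```

Because no single example can move the clipped-and-noised gradient by more than a bounded amount, an observer of the trained model learns very little about any individual training record, which is the formal guarantee differential privacy provides.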
Separately, on the second day of I/O, Google published a list of privacy commitments regarding its hardware products in which it detailed how personal data is used, and how it can be controlled. The document notes, for instance, that the new camera-toting Nest Hub Max, which leverages an on-device facial recognition feature dubbed Face Match to spot familiar people and surface contextually relevant information, doesn't send facial recognition data to the cloud.
Google also unveiled this week an improved Google Assistant that can perform tasks more quickly and that doesn't require repeated triggering with a hotword (e.g., "Hey Google"). The company said that because the speech recognition model is far smaller than that of the current version — half a gigabyte now compared with roughly 100 gigabytes — it's able to complete tasks like transcription, file searches, and selfie-snapping entirely offline.
“On-device machine learning powers everything from these incredible breakthroughs like Live Captions to helpful everyday features like Smart Reply,” explained Google’s senior director of Android Stephanie Cuthbertson. “And it does this with no user input ever leaving the phone, all of which protects user privacy.”
These, of course, weren’t the only privacy-related announcements Google made onstage and in the weeks leading up to I/O. It rolled out a setting that will let users delete location data automatically. It revealed that Chrome will implement policies that make cookies more private, alongside anti-fingerprinting tech that will prevent ad networks from tracking users’ behavior without their consent. And it teased Incognito Mode for Google Maps, a setting that, when enabled, won’t associate places users have searched for and navigated to with their accounts.
Google’s recommitment to privacy comes as skepticism toward the tech industry’s handling of personal data reaches an all-time high. About 91% of Americans say they’ve lost control over how their personal information is collected and used, according to the Pew Research Center, and 89% of people think companies should be more transparent about how their products use data.
The cynicism isn't all that surprising, given recent events like the Facebook and Cambridge Analytica data scandal. And Google is not immune: A Wall Street Journal report last summer revealed that Google+, Google's now-shuttered social network, failed to disclose a vulnerability that might have exposed the data of as many as 500,000 users.
With this week's announcements, Google is betting that a privacy-forward approach to AI — and to its products and services more broadly — will keep it in the good graces of its billions-strong user base. Time will tell.
Thanks for reading,
AI Staff Writer