Google’s AI looks beneath the surface for information about people, places, and things in images

Google today announced it will begin showing quick facts related to photos in Google Images, enabled by AI. Starting this week in the U.S., users who search for images on mobile might see information from Google’s Knowledge Graph — Google’s database of billions of facts — including people, places, or things germane to specific pictures.

Google says the new feature, which will start to appear on some photos within Google Images before expanding to more languages and surfaces over time, is intended to provide context around both images and the webpages hosting them. It’s estimated that images currently make up 12.4% of search queries on Google, and at least a portion of these are irrelevant or manipulated. In an effort to address this, Google earlier this year began identifying misleading photos in Google Images with a fact-check label, expanding the function beyond its standard non-image searches and video.

It should be noted that while the topics are curated in the sense that they’re sourced from the Knowledge Graph, this doesn’t preclude the potential for classification errors. Back in 2015, a software engineer pointed out that the image recognition algorithms in Google Photos were labeling his Black friends as “gorillas.” Three years later, Google hadn’t moved beyond a piecemeal fix, simply blocking image category searches for “gorilla,” “chimp,” “chimpanzee,” and “monkey” rather than reengineering the algorithm. More recently, researchers showed that Google Cloud Vision, Google’s computer vision service, automatically labeled an image of a dark-skinned person holding a thermometer “gun” while labeling a similar image with a light-skinned person “electronic device.” In response, Google says it adjusted the confidence scores to more accurately return labels when a firearm is in a photo.

We’ve reached out to Google for information about what safeguards — if any — are in place, and we’ll update this article once we hear back.



Tapping on images will reveal a list of related topics, such as the name of a pictured river or which city the river is in. Selecting one of those topics will show a short description of the person or thing it references, along with a link to learn more and subtopics to explore.

Google says these topic links are generated by evaluating visual and text signals extracted from images with AI and combining them with an understanding of the text on the images’ webpages. Together, these signals help determine the most likely people, places, or things relevant to a specific image and match them against existing topics in the Knowledge Graph; a topic is surfaced in Google Images only when there’s a high likelihood of a match.
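In broad strokes, the matching flow Google describes can be sketched as scoring each candidate topic on multiple signals and surfacing only high-confidence matches. The sketch below is purely illustrative: the function names, weights, scores, and threshold are all assumptions, not Google's actual implementation.

```python
# Illustrative sketch: blend a visual-match score with a page-text-match
# score, then surface only Knowledge Graph topics whose combined
# likelihood clears a high threshold. All values here are hypothetical.

def combine_signals(visual_score: float, text_score: float,
                    visual_weight: float = 0.6) -> float:
    """Weighted blend of visual and page-text scores, each in [0, 1]."""
    return visual_weight * visual_score + (1 - visual_weight) * text_score

def topics_to_surface(candidates, threshold: float = 0.8):
    """Return only those candidate topics with a high combined likelihood."""
    surfaced = []
    for topic, visual_score, text_score in candidates:
        if combine_signals(visual_score, text_score) >= threshold:
            surfaced.append(topic)
    return surfaced

# Hypothetical candidates: (topic, visual score, page-text score)
candidates = [
    ("Golden Gate Bridge", 0.95, 0.90),  # strong on both signals
    ("San Francisco",      0.75, 0.95),  # strong text, decent visual
    ("Bay Bridge",         0.60, 0.40),  # ambiguous -> not surfaced
]
print(topics_to_surface(candidates))
```

The high threshold reflects the stated behavior that topics appear only when there's a strong likelihood of a match, trading recall for precision.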


“In recent years, we’ve made Google Images more useful by helping you explore beyond the image itself. For example, there are captions on thumbnail images in search results, Google Lens lets you search within images you find, and you can explore similar ideas with the Related Images feature,” Google software engineer Angela Wu wrote in a blog post. “All of these improvements have the common goal of making it easier to find visual inspiration, learn new things, and get more done.”
