Google’s AI looks beneath the surface for information about people, places, and things in images

Image Credit: Reuters

Google today announced it will begin showing quick facts related to photos in Google Images, enabled by AI. Starting this week in the U.S. in English, users who search for images on mobile might see information from Google’s Knowledge Graph — Google’s database of billions of facts — including people, places, or things germane to specific pictures.

Google says the new feature, which will start to appear on some photos within Google Images before expanding to more languages and surfaces over time, is intended to provide context around both images and the webpages hosting them. SEO company Moz estimates that images currently make up 12.4% of search queries on Google, and at least a portion of the results those queries return are irrelevant or manipulated. In an effort to address this, Google earlier this year began identifying misleading photos in Google Images with a fact-check label, expanding the labels beyond its standard web and video search results.

While the topics are curated in the sense that they’re sourced from the Knowledge Graph, this doesn’t preclude the potential for classification errors. Back in 2015, a software engineer pointed out that the image recognition algorithms in Google Photos were labeling his Black friends as “gorillas.” Three years later, Google hadn’t moved beyond a piecemeal fix that simply blocked image category searches for “gorilla,” “chimp,” “chimpanzee,” and “monkey” rather than reengineering the algorithm. More recently, researchers showed that Google Cloud Vision, Google’s computer vision service, automatically labeled an image of a dark-skinned person holding a thermometer as a “gun” while labeling a similar image featuring a light-skinned person as an “electronic device.” In response, Google says it adjusted the confidence scores to more accurately return labels when a firearm is in a photo.

A Google spokesperson told VentureBeat via email that preventing failures of detection and labeling was a “core focus” from the very beginning of the project. The company put the feature through a human evaluation process to identify whether any examples were “offensive” or “upsetting,” and developed test cases on sensitive query sets to help with stress testing. Google also says it applies quality thresholds to determine which images can appear in highlighted features; if Google Images detects that a query is seeking sensitive content, it prevents Knowledge Graph content from appearing.
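Google hasn’t published how this gating works, but a minimal sketch of the idea might look like the following. Every name, term list, and threshold value here is an illustrative assumption, not Google’s actual implementation:

```python
# Illustrative sketch of a quality/sensitivity gate for Knowledge Graph
# overlays. All names, term lists, and thresholds are assumptions for
# demonstration; Google has not published its implementation.

SENSITIVE_TERMS = {"graphic", "violence", "tragedy"}  # hypothetical list
MATCH_THRESHOLD = 0.85  # hypothetical confidence cutoff


def query_is_sensitive(query: str) -> bool:
    """Crude stand-in for an intent classifier over the search query."""
    return any(term in query.lower() for term in SENSITIVE_TERMS)


def should_show_knowledge_panel(query: str, match_confidence: float) -> bool:
    """Suppress Knowledge Graph facts for sensitive queries or weak matches."""
    if query_is_sensitive(query):
        return False
    return match_confidence >= MATCH_THRESHOLD


print(should_show_knowledge_panel("golden gate bridge", 0.92))  # True
print(should_show_knowledge_panel("golden gate bridge", 0.40))  # False
```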

[Image: Google Images AI]

Tapping on images will reveal a list of related topics, such as the name of a pictured river or which city the river is in. Selecting one of those topics will show a short description of the person or thing it references, along with a link to learn more and subtopics to explore.

Google says these topics are generated by AI that evaluates an image’s visual and text signals (including the search queries that lead to it) and combines them with an understanding of the text on the image’s webpage. Together, these signals help determine the most likely people, places, or things relevant to a specific image; the result is matched against existing topics in the Knowledge Graph, and a topic is surfaced in Google Images only when there’s a high likelihood of a match.
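In rough pseudocode terms, the pipeline Google describes might resemble the sketch below, using the article’s own river example. The `Signals` structure, toy `KNOWLEDGE_GRAPH`, signal weights, scoring function, and surfacing threshold are all hypothetical stand-ins, not Google’s system:

```python
# Hypothetical sketch: match an image to Knowledge Graph topics by combining
# visual signals, query signals, and hosting-page text. Weights, data
# structures, and the threshold are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class Signals:
    visual_labels: dict[str, float]    # e.g. {"river": 0.9, "bridge": 0.6}
    query_terms: dict[str, float]      # terms from searches leading to the image
    page_text_terms: dict[str, float]  # terms extracted from the hosting webpage


KNOWLEDGE_GRAPH = {  # toy stand-in for Google's database of entities
    "Chicago River": {"river", "chicago", "waterway"},
    "Chicago": {"chicago", "city", "illinois"},
}

WEIGHTS = {"visual": 0.5, "query": 0.3, "page": 0.2}  # assumed weighting
SURFACE_THRESHOLD = 0.5  # only surface topics scoring above this


def score_entity(entity_terms: set[str], signals: Signals) -> float:
    """Weighted overlap between an entity's terms and each signal source."""
    def overlap(terms: dict[str, float]) -> float:
        hits = [w for t, w in terms.items() if t in entity_terms]
        return sum(hits) / max(len(terms), 1)

    return (WEIGHTS["visual"] * overlap(signals.visual_labels)
            + WEIGHTS["query"] * overlap(signals.query_terms)
            + WEIGHTS["page"] * overlap(signals.page_text_terms))


def related_topics(signals: Signals) -> list[str]:
    """Return Knowledge Graph topics likely relevant to the image."""
    scored = {name: score_entity(terms, signals)
              for name, terms in KNOWLEDGE_GRAPH.items()}
    return [n for n, s in sorted(scored.items(), key=lambda kv: -kv[1])
            if s >= SURFACE_THRESHOLD]


# Example: an image of the Chicago River. Only the strong match surfaces.
signals = Signals(
    visual_labels={"river": 0.9, "bridge": 0.6},
    query_terms={"chicago": 0.8, "river": 0.7},
    page_text_terms={"chicago": 0.5, "waterway": 0.4, "tour": 0.3},
)
print(related_topics(signals))  # ['Chicago River']
```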

“In recent years, we’ve made Google Images more useful by helping you explore beyond the image itself. For example, there are captions on thumbnail images in search results, Google Lens lets you search within images you find, and you can explore similar ideas with the Related Images feature,” Google software engineer Angela Wu wrote in a blog post. “All of these improvements have the common goal of making it easier to find visual inspiration, learn new things, and get more done.”
