Google Image Search Will Now Show a Photo’s History. Can It Spot Fakes?

The new “About this image” feature will help you discern whether a photo search result can be trusted. But it’s not a surefire safeguard against misinformation.

The spread of misinformation is a massive problem online, and generative AI is only accelerating the creation of inauthentic or real-but-repurposed media. Even in the pre-generative-AI era, an image surfaced through a quick Google search might have been used out of context or attached to a less-than-reliable website.

Google believes it has at least one solution for this problem. In Google image search results, users will start seeing an information box called “About this image.” It rolls out today in the US, and initially only in English. This follows the launch of “About this result” in 2021, which provides additional information about the source of a Google search result, and “About this author” in early 2023, which offers context about the author of a page.

The new image tool is supposed to give context in three specific areas: when the image (or similar ones) was first indexed by Google, which website it may have first appeared on, and where else it has appeared online, such as on social media. Google also says it plans to indicate whether a photo has been fact-checked before.


“We’re looking to invest in information literacy practices, to help people assess the reliability of a specific image,” says Nidhi Hebbar, Google’s product management lead for information literacy, “rather than just simply searching for images online.”

Google first talked about “About this image” at its developer conference in May, saying that the company’s information literacy team was building tools to help internet users spot misinformation and better understand the origins of photos indexed in Google Search. And while Google has avoided saying that this feature is an explicit response to the AI-generated imagery flooding the internet, that’s certainly part of it: The company’s own experimental Search Generative Experience will show “About this image” across all images. Every image generated on SGE will also be watermarked, much as all of Microsoft’s Bing AI-generated images now carry an invisible digital watermark.

Ahead of the rollout, Hebbar showed WIRED how “About this image” will work. She pulled up Google images of the Krzywy Domek (the Crooked House) in Sopot, Poland. She then navigated to one of the top image results, hosted on Wikipedia, and clicked on a three-dot menu icon that offered “About this image” as an option. The tool indicated that a version of this image is at least 10 years old. Two of the top website results for the image (aside from Wikipedia) were Atlas Obscura and the Huffington Post, suggesting some legitimacy. Even though the Krzywy Domek is so architecturally bizarre that it looks fake, Google’s “About this image” strongly suggests that it’s real.

The contextual information provided in that one example of a Google image search result was somewhat sparse. In other instances, Google might offer metadata as well: when, where, and how the photo was captured. Hebbar says the company plans to do this on its own generative search engine to start. But pulling that off across the trillions of images that appear in Google search results is nearly impossible, and whether a given image even contains that metadata depends largely on whether the original creator or publisher opted to include it in the file.

Google has stressed that the metadata field in “About this image” is not going to be a surefire way to see the origins, or provenance, of an image. It’s mostly designed to give more context, or to alert the casual internet user if an image is much older than it appears (suggesting it may have been repurposed) or has been flagged as problematic on the internet before.
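That capture metadata typically lives inside the image file itself as EXIF data, which is trivial to read when it’s present and simply absent when a creator or platform strips it. As a rough illustration, here is a minimal Python sketch, assuming the Pillow imaging library and a placeholder file name, that dumps whatever capture metadata a photo carries:

```python
# Minimal sketch: read whatever EXIF capture metadata a photo carries.
# Assumes the Pillow library (pip install Pillow); "photo.jpg" is a placeholder.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict:
    """Return the human-readable EXIF tags embedded in an image, if any."""
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag IDs to readable names like "DateTime" or "Model".
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = dump_exif("photo.jpg")
if not tags:
    print("No EXIF metadata: the creator stripped it, or never added it.")
for name, value in tags.items():
    print(f"{name}: {value}")
```

When a publisher preserves those fields, the “when” and “how” come back for free; when they’re stripped, as most social platforms do on upload, there is simply nothing for a tool like Google’s to surface.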

Provenance, inference, watermarking, and media literacy: These are just some of the words and phrases used by the research teams who are now tasked with identifying computer-generated imagery as it exponentially multiplies. But all of these tools are in some ways fallible, and most entities—including Google—acknowledge that spotting fake content will likely have to be a multi-pronged approach.

WIRED’s Kate Knibbs recently reported on watermarking, digitally stamping online texts and photos so their origins can be traced, as one of the more promising strategies; so promising that OpenAI, Alphabet, Meta, Amazon, and Google’s DeepMind are all developing watermarking technology. Knibbs also reported on how easily groups of researchers were able to “wash out” certain types of watermarks from online images.
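None of those companies has published the details of its scheme, but the basic idea, and its fragility, can be shown with a deliberately naive least-significant-bit watermark. This is a toy sketch assuming NumPy, not anyone’s production system; marks like this are exactly the kind researchers can wash out with simple re-encoding:

```python
# A deliberately naive least-significant-bit (LSB) watermark, for illustration
# only; production schemes (e.g., DeepMind's SynthID) are far more robust.
import numpy as np

def embed(pixels: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Hide a binary mark (0/1 array, same shape) in each pixel's lowest bit."""
    return (pixels & 0xFE) | mark  # clear the lowest bit, then write the mark

def extract(pixels: np.ndarray) -> np.ndarray:
    """Read the mark back out of each pixel's lowest bit."""
    return pixels & 1

image = np.random.default_rng(0).integers(0, 256, size=(4, 4), dtype=np.uint8)
mark = (np.indices((4, 4)).sum(axis=0) % 2).astype(np.uint8)  # checkerboard

marked = embed(image, mark)
assert np.array_equal(extract(marked), mark)          # mark survives a copy
assert np.abs(marked.astype(int) - image).max() <= 1  # visually invisible

# One blunt "washing" attack: requantizing the pixels erases the mark.
washed = (marked // 2) * 2
assert not np.array_equal(extract(washed), mark)
```

The washout attacks Knibbs covered are more sophisticated than this, but the lesson is the same: a watermark is only as useful as it is hard to remove.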

Reality Defender, a New York startup that sells its deepfake detector tech to government agencies, banks, and tech and media companies, believes that it’s nearly impossible to know the “ground truth” of AI imagery. Ben Colman, the firm’s cofounder and chief executive, says that establishing provenance is complicated because it requires buy-in on a specific set of standards from every manufacturer selling an image-making machine. He also believes that watermarking may be part of an AI-spotting toolkit, but it’s “not the strongest tool in the toolkit.”

Reality Defender is focused instead on inference—essentially, using more AI to spot AI. Its system scans text, imagery, or video assets and gives a 1-to-99 percent probability of whether the asset is manipulated in some way.
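Reality Defender’s actual models are proprietary, but the general shape of inference-based detection is a trained classifier that emits a manipulation probability for each asset. Here is a minimal sketch assuming PyTorch, a stand-in ResNet architecture, and a hypothetical fine-tuned checkpoint, detector.pt, none of which are Reality Defender’s:

```python
# Minimal sketch of inference-based detection: a binary classifier scores an
# image with a manipulation probability. The architecture, checkpoint, and
# file names below are stand-ins, not Reality Defender's actual system.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# ResNet-18 with a two-class head (real vs. manipulated).
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("detector.pt"))  # hypothetical checkpoint
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def manipulation_probability(path: str) -> float:
    """Return the model's probability that the image at `path` is manipulated."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    return probs[0, 1].item()  # index 1 = the "manipulated" class, by convention

score = manipulation_probability("suspect.jpg")  # placeholder file name
print(f"Probability manipulated: {min(max(round(score * 100), 1), 99)} percent")
```

Clamping the softmax output to the 1-to-99 range mirrors how Reality Defender reports its scores; the hard part, which this sketch omits entirely, is training a model whose probabilities actually mean something.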

“At the highest level we disagree with any requirement that puts the onus on the consumer to tell real from fake,” says Colman. “With the advancements in AI and just fraud in general, even the PhDs in our room cannot tell the difference between real and fake at the pixel level.”

To that point, Google’s “About this image” will exist under the assumption that most internet users, not just researchers and journalists, will want to know more about an image they find, and that the context provided will help tip a person off if something’s amiss. Google is also, of note, the company that in recent years pioneered the transformer architecture, the T in ChatGPT; the creator of a generative AI tool called Bard; and the maker of tools like Magic Eraser and Magic Memory that alter images and distort reality. It’s Google’s generative AI world, and most of us are just trying to spot our way through it.