When you ask AI to look in the mirror, it doesn't always see itself. That's the sense you get when you ask it to determine whether an image is genuine or AI-generated.
Google last week took a stab at helping us distinguish real from deepfake, albeit an extremely limited one. In the Gemini app, you can share an image and ask if it's real, and Gemini will check for SynthID -- Google's digital watermark -- to tell you whether it was made by Google's AI tools. (On the other hand, Google last week also rolled out Nano Banana Pro, its new image model, which makes it even harder to spot a fake with the naked eye.)
Within this limited scope, Google's reality check works pretty well. Gemini will tell you if something was made by Google's AI, and in my testing it even worked on a screenshot of an image. The answer is quick and to the point -- yes, this image, or at least more than half of it, is fake.
But ask it about an image made by any other image generator and you won't get that smoking-gun answer. What you get instead is a review of the evidence: The model looks for the typical tells that something is artificial. At that point, it's basically doing what we do with our own eyes, and we still can't fully trust its results.
As reliable and necessary as Google's SynthID check is, asking a chatbot to evaluate something that lacks a watermark is almost worthless. Google has provided a useful tool for checking the provenance of an image, but if we're going to be able to trust our own eyes on the internet again, every AI interface we use should be able to check images from every kind of AI model.
I hope that soon we'll be able to just drop an image into, say, Google Search and find out if it's fake. Deepfakes are getting too good for us not to have that reality check.
Checking images with chatbots is a mixed bag
There's very little to say about Google's SynthID check. When you ask Gemini (on the app) to evaluate a Google-generated image, it knows what it's looking at. It works. I'd like to see it rolled out across all the places Gemini appears -- like the browser version and Google Search -- and according to Google's blog post on the feature, that's already in the works.
Because Gemini in the browser doesn't have this functionality yet, we can see how the model itself responds, without SynthID, when asked if an AI-generated image is real. I asked the browser version of Gemini to evaluate an infographic Google provided reporters as a handout showing its new Nano Banana Pro model in action. This was AI-generated -- and even said so in its metadata. Gemini in the app used SynthID to suss it out. Gemini in the browser was wishy-washy: It said the design could be from AI or a human designer. It even said its SynthID tool didn't find anything indicating AI. (Although when I asked it to try again, it said it encountered an error with the tool.) The bottom line? It couldn't tell.
What about other chatbots? I had Nano Banana Pro generate an image of a tuxedo cat lying on a Monopoly board. The image, at a glance, was plausibly realistic. Unsuspecting coworkers I sent it to thought it was my cat. But if you look more closely, you'll see the errors: For example, the Monopoly board makes no sense -- Park Place shows up in multiple spots where it shouldn't be, and the colors are off.
This is not a real cat or a real Monopoly game board. The image was generated by Google's Nano Banana Pro AI image model.
I asked a variety of AI chatbots and models if the image was AI-generated and the answers were all over the place.
Gemini on my phone figured it out instantly using the SynthID checker. Gemini 3, the higher-level reasoning model released this week, offered a detailed analysis showing why it was AI-generated. Gemini 2.5 Flash (the default model you get by picking "Fast") guessed it was a real photograph based on the level of detail and realism. I tried ChatGPT twice on two different days and got two different answers: one with an extensive explanation of why it was obviously real, the other with an equally long dissertation on why it was fake. Claude, using the Haiku 4.5 and Sonnet 4.5 models, said it looked real.
When I tested images generated by non-Google AI tools, chatbots made their assessments based on the quality of the generation. Images with more obvious tells -- for instance, mismatched lighting and poorly rendered text -- were more reliably spotted as AI. But the theme was inconsistency. Really, the chatbots weren't any more accurate than giving an image a deep, critical look with my own eyes. That's not good enough.
The future of AI detection
Google's newest tool charts one potential path forward, even if it only goes so far. Yes, one solution to the growing problem of deepfakes is being able to check an image in a chatbot app. But it needs to work for more images and more apps.
It should not require special knowledge to spot a fake. You shouldn't have to find a bespoke app, parse metadata or know offhand what errors might indicate an AI-generated image. As we've seen from the dramatic improvement in image and video models just in the past few months, the tells that are reliable today may be useless tomorrow.
Read more: Google's Nano Banana Pro Makes Ultrarealistic AI Images. It Scares the Hell Out of Me
If you run across an image on the internet and you have doubts about it, you should be able to go to Gemini, or Google Search, or ChatGPT, or Claude, or whatever tool you choose, and have it scan for a universal, hard-to-remove digital watermark. Work toward this is happening through the Coalition for Content Provenance and Authenticity, or C2PA. The result should be something that makes it easy for ordinary people to check without needing a special app or expertise. It should be available in something you use every day. And when you ask AI, it should know where to look.
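For the curious, here's a rough sketch of the kind of manual metadata digging I'm arguing ordinary people shouldn't have to do. It uses Python's Pillow library to peek at whatever provenance hints a file happens to carry; the keywords it searches for are my own guesses at how a generator or Content Credentials manifest might label itself, and it won't catch SynthID at all, since that watermark lives in the pixels rather than the metadata.

```python
# A do-it-yourself peek at an image file's metadata using Pillow.
# Note: this is NOT how SynthID works (that watermark is embedded in the
# pixels), and the keyword list is a guess about how a generator or a
# C2PA/Content Credentials manifest might label itself.
from PIL import Image

def peek_at_provenance(path: str) -> None:
    img = Image.open(path)

    # Per-format metadata (PNG text chunks, etc.) ends up in img.info;
    # some tools stash generator names or manifests here.
    for key, value in img.info.items():
        text = f"{key}: {value}"
        if any(hint in text.lower() for hint in ("c2pa", "credential", "generat", "ai")):
            print(text[:120])

    # The standard EXIF "Software" tag (0x0131) sometimes names the tool
    # that produced or last edited the image.
    software = img.getexif().get(0x0131)
    if software:
        print("EXIF Software:", software)

peek_at_provenance("suspect_image.png")
```

Even when something turns up, you're left interpreting raw tags yourself -- exactly the kind of homework a universal reality check should make unnecessary.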
We shouldn't have to guess what's real and what isn't. AI companies have a responsibility to give us a foolproof, universal reality check. Maybe this is a way forward.


