
You can spot deepfakes by looking into their eyes, new study shows
Deepfakes are more widespread than ever, affecting lives, tarnishing reputations, and spreading misinformation.
You can’t blame yourself for falling for fake images created by AI, but researchers say a simple test can improve your ability to spot them.

Study discovers test to spot deepfakes
For the unversed, deepfakes refer to visuals in which the bodies and faces of people are morphed to make them look like someone else.
Celebrities and prominent personalities usually become targets of the technology as their deepfakes are circulated online to spread false information.
The recreated visuals bear a striking resemblance to the real image, so it’s hard to tell the original from the fake. However, researchers have found a weakness in the malicious practice.
Research shared at the Royal Astronomical Society’s National Astronomy Meeting in Hull found that “AI-generated fakes can be spotted by analyzing human eyes in the same way that astronomers study pictures of galaxies,” ScienceDaily reports.
The test is simple: if the reflections in the person’s eyeballs match, the image is likely of a real human. If you spot an inconsistency between the reflections, the image is probably a deepfake.
“The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person,” said Kevin Pimbblet, professor of astrophysics and director of the Centre of Excellence for Data Science, Artificial Intelligence, and Modelling at the University of Hull.
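The article doesn’t specify which statistic the researchers used, but astronomers commonly summarize how light is distributed across a galaxy image with measures such as the Gini coefficient. A minimal sketch of that idea applied to eye reflections, assuming you already have grayscale crops of each eye’s reflection (the function names and the threshold are illustrative, not taken from the study):

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative pixel intensities.

    0 means the light is spread evenly across the crop; values near 1
    mean it is concentrated in a few bright pixels. Astronomers use
    this statistic to characterize galaxy light profiles.
    """
    v = np.sort(np.asarray(values, dtype=float).ravel())
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    # Equivalent to the mean absolute difference normalized by the mean.
    idx = np.arange(1, n + 1)
    return float((2 * idx - n - 1).dot(v) / (n * v.sum()))

def reflections_consistent(left_eye, right_eye, tol=0.1):
    """Flag an image as suspicious when the two eyes' reflection
    statistics diverge. `tol` is an illustrative threshold, not a
    value reported by the researchers."""
    return abs(gini(left_eye) - gini(right_eye)) <= tol
```

For a real face lit by the same scene, both crops should yield similar scores; a large gap between the two is the physics-level inconsistency the quote describes.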
Fake images are more than just inconsistency
Even though an inconsistency in the eyeballs increases the probability that an image is a deepfake, it can’t determine authenticity on its own.
“There are false positives and false negatives; it’s not going to get everything. But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes,” said Professor Pimbblet.
While the method, “typically used in astronomy” to analyze light distribution, “is not a silver bullet for detecting fake images,” it does offer a “plan of attack” in the fight against the spread of misinformation.