What Image Search Does Catfish Use is a common question for people who want to avoid being deceived online. Check out this post to find out more.
What Image Search Does Catfish Use?
Catfish analyzes photographs using AI-based face recognition software that reverse-searches the sample image against millions of images in its databases to verify people's identities, match the exact photo, and expose its source.
The program used by Catfish performs a reverse image search, but it is not a general-purpose reverse image search engine like Google Image Search or TinEye. Those engines search their databases for images that resemble the one you upload and show you all the possible visual matches.
For example, if you upload a photo of a dog to Google Image Search, it returns a list of other images that look similar to the one you uploaded.

If you upload a photo of Kim Kardashian, however, it will mostly skip images that merely look similar and instead show you photos that actually contain Kim Kardashian.
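The "similar images" behavior of engines like Google Image Search or TinEye can be approximated with a perceptual hash: visually similar pictures produce hashes that differ in only a few bits. Here is a minimal, self-contained sketch using tiny lists of grayscale pixel values as stand-in "images" (a real engine would decode actual image files and use far larger hashes):

```python
def average_hash(pixels):
    """Toy perceptual hash: each bit records whether a pixel is
    brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Count the bits where two hashes differ; small distance
    means the images look alike."""
    return sum(x != y for x, y in zip(a, b))

# Two nearly identical "photos" and one unrelated one (4x4 grayscale values)
photo     = [10, 12, 200, 210, 11, 13, 198, 205, 9, 14, 202, 208, 10, 12, 199, 207]
edited    = [12, 14, 198, 212, 10, 15, 196, 206, 11, 13, 204, 206, 12, 11, 197, 209]
unrelated = [200, 10, 12, 205, 198, 14, 11, 202, 201, 9, 13, 206, 199, 12, 10, 204]

near = hamming(average_hash(photo), average_hash(edited))      # small distance
far  = hamming(average_hash(photo), average_hash(unrelated))   # large distance
```

An engine built this way would index hashes for every stored image and return those within a small Hamming distance of the query, which is why a lightly cropped or recolored copy of a photo still shows up in the results.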
Catfish's image analysis software does not rely on this method alone. It analyzes the image and returns results based on who is in the photo and where it was taken.
How Did They Do It?
When you upload a photo, Catfish analyzes it and returns results based on which people appear in it and where it was taken.
For example, if we upload a selfie taken at Times Square in New York City, it will return people who look like us along with other photos taken at Times Square. A selfie taken at Disneyland in California would likewise return lookalikes and pictures from Disneyland.
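The behavior described above can be sketched as a simple filter over a photo database, where each record carries a location tag and a face-similarity score against the query selfie. Everything here (the record layout, the names, the 0.8 threshold) is an illustrative assumption, not Catfish's actual API:

```python
# Hypothetical records: (photo_id, location, face similarity to the query selfie)
DATABASE = [
    ("p1", "Times Square, New York City", 0.92),
    ("p2", "Times Square, New York City", 0.15),
    ("p3", "Disneyland, California",      0.88),
    ("p4", "Times Square, New York City", 0.95),
]

def matches_for(query_location, threshold=0.8):
    """Return two result sets: photos of lookalikes, and every
    photo tagged with the same place as the query selfie."""
    lookalikes = [pid for pid, loc, sim in DATABASE if sim >= threshold]
    same_place = [pid for pid, loc, sim in DATABASE if loc == query_location]
    return lookalikes, same_place

people, places = matches_for("Times Square, New York City")
```

Running this for a Times Square selfie pulls both the faces that resemble yours (wherever they were photographed) and everything shot at Times Square, which is exactly the two-sided result list the examples above describe.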
The program used by Catfish is based on artificial intelligence technology in the mold of DeepFace, the face recognition system developed by Facebook's AI research team.
DeepFace can recognize faces with roughly 97% accuracy, approaching human-level performance: its error rate of about 2.65% on the standard benchmark is close to the roughly 2.5% that humans achieve on the same task. The tool can even identify faces in photographs reasonably well when there is no frontal lighting or when facial hair partly obscures the face.
DeepFace is built on deep learning, a type of machine learning that can teach itself new patterns with little input from humans. It also learns more efficiently than earlier algorithms because it can "see" how things change over time.
For example, once it learns how a person's face changes over time, it can compare that face to other people's faces and spot the similarities. DeepFace can also recognize faces at an angle or in poses it has not seen before.
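Under the hood, systems like DeepFace map each face to a numeric embedding vector and compare faces by the distance between their vectors; the same person photographed at a different angle should still land close to their frontal shot. A minimal sketch with hand-made toy vectors (real embeddings have hundreds of dimensions and come from a trained neural network):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: 1.0 means
    identical direction, values near 0 mean unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: the same person frontal vs. angled, and a different person
alice_front  = [0.90, 0.10, 0.30]
alice_angled = [0.85, 0.15, 0.35]
bob_front    = [0.10, 0.90, 0.20]

same_person      = cosine_similarity(alice_front, alice_angled)  # high
different_person = cosine_similarity(alice_front, bob_front)     # low
```

A recognizer then just thresholds this score: if the similarity between a query face and a stored face is above some cutoff, they are declared the same person, regardless of pose or lighting differences between the two photos.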