Last month an image purporting to show children in cages under current immigration policies went viral on social media, accelerated by a number of high-profile journalists, activists and former government officials who shared it widely. Their visibility and stature led many to trust the image at face value, without the suspicion and verification users might apply to other viral images. The image was real, but it had been taken out of context and spread virally before users began to realize it actually dated from a 2014 news article. Yet when I first saw the image, I simply right-clicked on it and ran a reverse Google Images search that immediately turned up the original 2014 source. Could social media platforms like Twitter and Facebook automate such image searches to help combat fake news at scale?
Social media today is an ocean of false and misleading information, sometimes spread for nefarious purposes, but far more often by well-meaning individuals who share first and ask questions later. The ease and speed with which a 2014 news image went viral, propelled by the very individuals ordinarily tasked with helping to combat false information, stands testament to just how easily false information spreads in today's speed-over-accuracy information ecosystem. In contrast to unverifiable citizen imagery that lacks provenance, professional news photography is particularly easy to verify, yet that ease of verification did little to slow the spread of this image.
The problem is that social media norms encourage sharing over understanding, creating an informational ecosystem in which users act more as transmission nodes, receiving and passing along information, than as true consumers who digest and reason about what they receive. According to one study, 59% of links shared on social media were never actually viewed by the person sharing them. Meanwhile, a growing body of research suggests that in our click-happy world of social media, social capital depends on being the quickest to share new information with our connections, with little incentive to take the time to read, digest and vet that information first.
The mobile interfaces that dominate social media consumption today worsen this effect, entrenching the walled garden in which we consume social content and making it difficult to perform even cursory research to verify a post. After all, juggling multiple browser tabs and wading through multiple websites to verify the provenance and context of an image seen on social media takes time even on a desktop, and it is especially hard in the resource- and screen-constrained environment of mobile devices.
On a desktop using the Google Chrome browser it is relatively trivial to right-click on a questionable image, click "Search Google for image" and instantly see all of the places on the web where Google's search engine has seen that image before. Google's commercial Cloud Vision API goes a step further and can even OCR the image, recognizing text in 55 languages, making it possible to fact-check visual memes that contain textual quotes or statements. Even more usefully, the Cloud Vision API scans previous appearances of the image across the web, examining the captions associated with it in each case in all of the languages it supports, and assigns topical labels that summarize the most common descriptions of the image online.
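To make this concrete, here is a rough sketch of what such a provenance check might look like using the published Cloud Vision API and its Python client library. The `summarize_matches` helper and `check_image_provenance` function are hypothetical names of my own; the sketch assumes the `google-cloud-vision` package is installed and application credentials are configured, and it is illustrative rather than production code.

```python
from typing import List, Tuple


def summarize_matches(pages: List[Tuple[str, str]], limit: int = 3) -> List[str]:
    """Reduce (url, page_title) match records to a short provenance list."""
    return [f"{title or '(untitled)'} | {url}" for url, title in pages[:limit]]


def check_image_provenance(path: str) -> List[str]:
    """Ask Cloud Vision where else on the web this image has appeared."""
    # Imported lazily so the pure helper above works without the SDK installed.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # Web detection returns pages that contain matching copies of the image,
    # along with "best guess" labels summarizing how the web describes it.
    detection = client.web_detection(image=image).web_detection
    pages = [(p.url, p.page_title) for p in detection.pages_with_matching_images]
    return summarize_matches(pages)
```

A platform could surface the first few entries of that list, and the earliest crawl date it has on record for them, directly beneath a shared image.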
Imagine if the major social media platforms like Twitter and Facebook adopted a similar reverse image search and OCR for all images shared on their platforms. Every image shared would be compared against a database of unique images, and for each image seen for the first time, the system would perform an open-web comparison to find all of its previous appearances online. The date the image was first seen on the web, and links to a few high-profile appearances of it, would then be displayed prominently under each instance of the image being shared.
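The key to making this work at platform scale is that the expensive open-web lookup only needs to happen once per unique image, with the result cached against a perceptual fingerprint of the image. The sketch below illustrates the idea with a simple difference hash (dHash) computed over a grayscale pixel grid; all function names are hypothetical, and a real system would decode actual image bytes (for example with Pillow) and tolerate near-duplicates by comparing hashes within a small Hamming distance rather than requiring exact matches.

```python
def dhash(pixels, width, height, hash_size=8):
    """Difference hash: compare neighboring pixels on a downscaled grid.

    `pixels` is a flat row-major list of grayscale values. Each bit of the
    hash records whether a pixel is darker than its right-hand neighbor,
    which survives rescaling and mild recompression.
    """
    def sample(x, y):
        # Naive nearest-neighbor downscale to a (hash_size+1) x hash_size grid.
        return pixels[(y * height // hash_size) * width
                      + (x * width // (hash_size + 1))]

    bits = 0
    for y in range(hash_size):
        for x in range(hash_size):
            bits = (bits << 1) | (1 if sample(x, y) < sample(x + 1, y) else 0)
    return bits


seen = {}  # fingerprint -> cached provenance result


def first_appearance(pixels, width, height, lookup):
    """Return provenance for this image, hitting the open web only once."""
    h = dhash(pixels, width, height)
    if h not in seen:
        seen[h] = lookup(h)  # e.g. a reverse image search, done once per image
    return seen[h]
```

Every subsequent share of the same image, at any scale or crop-free recompression, hits the cache instead of triggering a new search, which is what makes per-image provenance plausible at the volume of a Twitter or Facebook.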
In the case of the immigration image, the photograph was shared with a link to the article it came from, which was clearly dated 2014. But the display formats Twitter and Facebook use do not clearly and prominently emphasize a link's publication date, meaning that all most users saw was the photograph and a citation to azcentral.com. Displaying the publication dates of shared links more prominently might have slowed the image's spread, since users could have seen immediately that the article dated to 2014.