Despite bans, Giphy still hosts self-harm, hate speech and child sex abuse content

But despite its ban on illicit content, the site is littered with self-harm and child sex abuse imagery, TechCrunch has learned.

A new report has uncovered a host of toxic content hiding within the popular GIF-sharing community, including illegal child abuse content, depictions of rape and other toxic imagery associated with topics like white supremacy and hate speech.
The report, shared exclusively with TechCrunch, also showed content encouraging viewers into unhealthy weight loss and glamorizing eating disorders. (We did not search for terms that may have returned child sex abuse content, as doing so would be illegal.) Although Giphy blocks many hashtags and search terms from returning results, search engines like Google and Bing still cache images with certain keywords. When we tested using several words associated with illicit content, Giphy sometimes showed content from its own search results.

The report was compiled by L1ght, a startup that develops solutions to combat online toxicity. In its tests, one search of illicit material returned 195 pictures on the first search page alone.
The tags themselves were often innocuous, helping users escape detection, but they served as a gateway to the toxic material. Despite a ban on self-harm content, researchers found numerous keywords and search terms that still returned the banned content.
[Image caption: We have blurred this graphic image.]
Some of the hashtags and search terms were associated with known child exploitation sites. We are not publishing the hashtags, search terms or sites used to access the content, but we passed on the information to the National Center for Missing and Exploited Children, a national nonprofit established by Congress to fight child exploitation. Giphy said it was working with the authorities to report and remove the content, and expressed frustration that L1ght had not contacted Giphy with the allegations first.
L1ght uses a proprietary artificial intelligence engine to uncover illegal and other offensive content. Using that platform, the researchers can find other related content, allowing them to uncover vast caches of illegal or banned content that would otherwise, for the most part, go unseen. This sort of toxic content plagues online platforms, but algorithms only play a part.
More tech companies are finding human moderation is critical to keeping their sites clean. But much of the focus to date has been on the larger players in the space, like Facebook, Instagram, YouTube and Twitter. Facebook, for example, has been routinely criticized for outsourcing moderation to teams of low-paid contractors who often struggle to cope with the sorts of things they have to watch, even experiencing post-traumatic stress-like symptoms as a result of their work.
Pedophiles were also found using the comments section on YouTube videos to guide one another to other videos to watch while making predatory remarks. Giphy and other smaller platforms have largely stayed out of the limelight during the past several years.
L1ght found that some of the abusive content was uploaded to accounts marked as private. But even in the case of private accounts, the abusive content was being indexed by some search engines, like Google, Bing and Yandex, which made it easy to find.
The firm also discovered that pedophiles were using Giphy as a means of spreading their materials online, including communicating with each other and exchanging materials. Some of this communication was carried out through text overlays on the GIFs themselves. This same process was utilized in other communities, including those associated with white supremacy, bullying and child abuse.
Last year the company was booted from Instagram for letting through racist content. Giphy is far from alone, but it is the latest example of a company failing to get content moderation right.
Earlier this year, and following a tip, TechCrunch commissioned then-AntiToxin to investigate the child sex abuse imagery problem on Microsoft's search engine Bing. Under close supervision by the Israeli authorities, the company found dozens of illegal images in the results from searching certain keywords. The illegal content had made it into Bing's search results despite Microsoft pioneering its PhotoDNA photo detection tool, which the software giant built a decade ago to identify illegal images.
L1ght, which has a commercial interest in this space, was founded a year ago to help combat online predators, bullying, hate speech, scams and more. The company was started by former Amobee chief executive Zohar Levkovitz and cybersecurity expert Ron Porat, himself a previous startup founder. Its technology is designed to identify, analyze and predict online toxicity with near real-time accuracy.