Despite serving as an online water cooler for journalists, politicians and venture capitalists, Twitter isn’t the most profitable social network on the block. Amid internal upheaval and growing pressure from investors to make more money, Twitter has reportedly considered monetizing adult content.
According to a report by The Verge, Twitter was poised to become a competitor to OnlyFans by allowing adult creators to sell subscriptions on the social media platform. The idea may sound strange at first, but it isn’t that far-fetched – some adult creators already rely on Twitter to promote their OnlyFans accounts, since Twitter is one of the only major platforms where posting porn doesn’t violate the rules.
But Twitter apparently halted that project after an 84-employee “red team,” set up to test the product for security flaws, found that Twitter was unable to detect child sexual abuse material (CSAM) and non-consensual nudity at scale. Twitter also had no tools to verify that creators and consumers of adult content were over 18 years of age. According to the report, Twitter’s Health team had been warning higher-ups about the platform’s CSAM problem since February 2021.
To detect such content, Twitter uses PhotoDNA, a Microsoft-developed database that helps platforms quickly identify and remove known CSAM. But PhotoDNA only catches images that are already in that database; newer or digitally altered images can evade detection.
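PhotoDNA itself is proprietary, so the snippet below is only a rough, hypothetical sketch of the general technique it belongs to – perceptual hashing – using a simple average-hash function rather than anything Twitter or Microsoft actually runs. It illustrates why a database of known fingerprints catches re-uploads of previously identified images but not new material: an image is flagged only if its fingerprint lands close to one already stored.

```python
# Illustrative perceptual-hash matching (NOT PhotoDNA; hash design and
# threshold are invented for this sketch).
from PIL import Image

def average_hash(path, size=8):
    """Shrink to size x size grayscale, threshold at the mean, pack into bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def is_known(path, known_hashes, max_distance=5):
    """Flag an image only if it closely matches a previously catalogued one."""
    h = average_hash(path)
    return any(hamming(h, k) <= max_distance for k in known_hashes)
```

Real systems use far more robust fingerprints and thresholds, but the core limitation is the same: the database has to already contain the image, or something very close to it, before a match can occur.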
“You see people saying, ‘Well, Twitter is doing a bad job,'” said Matthew Green, an associate professor at the Johns Hopkins Information Security Institute. “And then it turns out that Twitter uses the same PhotoDNA scanning technology that almost everyone else uses.”
Twitter’s annual revenue — approximately $5 billion in 2021 — is small compared to a company like Google, which earned $257 billion in revenue last year. Google has the financial means to develop more sophisticated technology to identify CSAM, but even these machine learning-powered mechanisms are not foolproof. Meta also uses Google’s Content Safety API to detect CSAM.
“This new kind of experimental technology is not an industry standard,” Green explained.
In one recent case, a father noticed that his toddler’s genitals were swollen and painful, so he contacted his son’s doctor. Ahead of a telemedicine appointment, the father sent photos of his son’s infection to the doctor. Google’s content moderation systems flagged these medical images as CSAM, locking the father out of all his Google accounts. Police were alerted and began investigating the father, but ironically they were unable to reach him because his Google Fi phone number had been disconnected.
“These tools are powerful because they can find new things, but they’re also prone to error,” Green told TechCrunch. “Machine learning doesn’t know the difference between sending something to your doctor and actual child sexual abuse.”
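To make the contrast with hash matching concrete, here is a purely hypothetical sketch of a classifier-based pipeline of the kind Green describes: instead of looking an image up in a database, a model assigns it a risk score and anything above a threshold gets flagged, with no access to context such as who the image was sent to or why. The scoring function and threshold are invented for illustration and do not reflect Google’s actual Content Safety API.

```python
# Hypothetical classifier-based flagging pipeline (illustrative only).

def score_image(image_bytes: bytes) -> float:
    """Stand-in for a trained model; returns a risk score between 0 and 1."""
    raise NotImplementedError("placeholder for a real ML model")

def review_upload(image_bytes: bytes, threshold: float = 0.8) -> str:
    score = score_image(image_bytes)
    if score >= threshold:
        # The model sees only pixels, not intent or recipient, so a photo
        # sent to a pediatrician can score the same as abuse imagery.
        return "flagged_for_human_review"
    return "allowed"
```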
Although this type of technology is used to protect children from exploitation, critics worry that the cost of that protection – mass surveillance and scanning of personal data – is too high. Apple planned to launch its own CSAM detection technology, called NeuralHash, last year, but the product was scrapped after security experts and privacy advocates pointed out that the technology could easily be abused by government authorities.
“Systems like this could report vulnerable minorities, including LGBT parents, in places where police and community members are unfriendly,” wrote Joe Mullin, a policy analyst for the Electronic Frontier Foundation, in a blog post. “Google’s system could wrongly report parents to authorities in autocratic countries or places with corrupt police, where wrongly accused parents cannot be assured of due process.”
This is not to say that social platforms cannot do more to protect children from exploitation. Until February, Twitter gave users no way to flag content containing CSAM, which meant some of the site’s most harmful content could remain online for long periods even after being reported. Last year, two people sued Twitter for allegedly profiting from videos that were recorded of them as teenage victims of sex trafficking; the case is headed to the US Court of Appeals for the Ninth Circuit. The plaintiffs allege that Twitter failed to remove the videos after being notified of them, and that the videos garnered over 167,000 views.
Twitter faces a tough problem: the platform is big enough that detecting all CSAM is nearly impossible, but it doesn’t make enough money to invest in more robust safeguards. According to The Verge’s report, Elon Musk’s potential acquisition of Twitter has also affected the priorities of the company’s health and safety teams. Last week, Twitter reportedly reorganized its health team to focus instead on identifying spam accounts; Musk has vehemently argued that Twitter is lying about the prevalence of bots on the platform, citing that as his reason for wanting to walk away from the $44 billion deal.
“Everything that Twitter does that is good or bad will now be weighed in light of ‘How does it affect the process [with Musk]?’” Green said. “There could be billions of dollars at stake.”
Twitter did not respond to TechCrunch’s request for comment.