TikTok has announced that its midterm election center will go live in its US app starting today, August 17, 2022, where it will be available to users in more than 40 languages, including English and Spanish.
The new feature will allow users to access state-by-state election information, including details on how to register to vote, how to vote by mail, how to find your polling place and more, provided by TikTok partner NASS (the National Association of Secretaries of State). TikTok has also recently partnered with Ballotpedia to let users see who is on their ballot, and it works with a variety of voting assistance programs — including the Center for Democracy in Deaf America (for deaf voters), the Federal Voting Assistance Program (for voters abroad), the Campus Vote Project (for students) and Restore Your Vote (for people with past convictions) — to provide content for those groups. The AP will continue to provide the latest election results in the Election Center.
The hub can be accessed through a number of places within the TikTok app, including by clicking on content tags found within a video, via a banner in the app’s Friends tab, and via the hashtag and search pages.
The company also detailed its broader plan to combat election disinformation on its platform, building on the lessons it learned from the 2020 election cycle. For starters, it launched this in-app election hub six weeks earlier than it did in 2020. It is also stepping up its efforts to educate the creator community about its election content policies. This will include an educational series on the Creator Portal and on TikTok, plus briefings with both creators and agencies to further clarify its rules.
Much of how TikTok will deal with election disinformation hasn’t changed, however.
On the policy side, TikTok says it will monitor content that violates its guidelines. This includes misinformation about how to vote, harassment of poll workers, harmful deepfakes of candidates and incitement to violence. Depending on the violation, TikTok may remove the user’s content or account, or ban the device. Additionally, TikTok may choose to redirect search terms or hashtags to its Community Guidelines, as it did during the previous election cycle for hashtags related to terms like “stop the steal” or “sharpiegate.”
The company also reiterated its ban on political advertising on the platform, which extends not only to ads paid for through its ad platform, but also to branded content published by creators themselves. That means a political action committee can’t circumvent TikTok’s policies by paying a creator to make a TikTok video advocating its political position, the company said.
Of course, as important as the policies themselves is TikTok’s ability to enforce them.
The company says it will use a combination of automated technology and people from the trust and safety team to help make moderation decisions. The former, TikTok admits, can only go so far. Technology can be trained to identify keywords associated with conspiracy theories, but only a human would be able to tell if a video is promoting that conspiracy theory or working to debunk it. (The latter is permitted by TikTok’s guidelines.)
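That two-stage division of labor — automated keyword screening that only flags videos, with a human making the final promote-versus-debunk call — can be sketched roughly as follows. This is a hypothetical illustration, not TikTok’s actual system; all names, keywords and return values are invented for the example.

```python
from typing import Optional

# Illustrative keyword list; a keyword match alone cannot tell whether a
# video promotes a conspiracy theory or works to debunk it.
CONSPIRACY_KEYWORDS = {"rigged ballots", "vote machines hacked"}

def needs_human_review(transcript: str) -> bool:
    """Stage 1: a cheap automated screen over the video transcript."""
    text = transcript.lower()
    return any(keyword in text for keyword in CONSPIRACY_KEYWORDS)

def moderate(transcript: str, human_says_promotes: Optional[bool] = None) -> str:
    """Stage 2: a human reviewer makes the final call on flagged videos."""
    if not needs_human_review(transcript):
        return "allow"
    if human_says_promotes is None:
        # The machine can only flag; it never removes on a keyword match alone.
        return "queue_for_human_review"
    # Debunking content is permitted under the guidelines; promotion is not.
    return "remove" if human_says_promotes else "allow"
```

The key design point the article describes is that the automated stage is deliberately low-authority: it routes content to people rather than acting on its own.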
TikTok declined to share how many employees are dedicated to moderating election misinformation, but noted that the larger trust and safety team has grown over the past few years. The stakes are higher this election, however, as it follows shortly after TikTok moved its US user data to the Oracle cloud and tasked the company with auditing its moderation policies and algorithmic recommendation systems.
“As part of Oracle’s work, they will regularly review and validate both our recommendation and our moderation models,” confirmed TikTok’s head of safety in the US, Eric Han. “This means that there will be regular audits of our content moderation processes, both from automated systems … technologies — and how we detect and rank certain things — and the content that is moderated and reviewed by humans,” he explained.
“This will help us have an extra layer and check to make sure our decisions highlight what our guidelines mandate to the community and what we want our community guidelines to do. And obviously this builds on previous announcements that we’ve talked about in the past in our relationship and partnership with Oracle for US consumer data storage,” Han said.
Content can be flagged for moderation in several ways. If the community reports a video in the app, it will be reviewed by TikTok’s teams, which may also work with third-party threat intelligence firms to detect things like coordinated activity and covert operations, such as those by foreign actors seeking to influence the US election. A video may also be reviewed if it grows in popularity, to prevent TikTok’s main feed — its For You feed — from spreading false or misleading information. While videos are being assessed by fact-checkers, they are not eligible for recommendation to the For You feed, TikTok notes.
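The three review pathways described here — community reports, threat-intelligence signals, and popularity spikes — plus the For You holdout during fact-checking, can be summarized in a small sketch. All field names and the view threshold are illustrative assumptions, not disclosed TikTok values.

```python
# Hypothetical sketch of the review triggers described in the article.
POPULARITY_REVIEW_THRESHOLD = 100_000  # views; illustrative, not a real figure

def review_triggers(video: dict) -> list[str]:
    """Return which pathways would send this video to moderation review."""
    triggers = []
    if video.get("user_reports", 0) > 0:
        triggers.append("community_flag")
    if video.get("threat_intel_hit", False):
        # e.g. coordinated activity or covert influence operations
        triggers.append("threat_intelligence")
    if video.get("views", 0) >= POPULARITY_REVIEW_THRESHOLD:
        # Popular videos get checked before spreading further via For You.
        triggers.append("popularity_check")
    return triggers

def eligible_for_for_you(video: dict) -> bool:
    """Videos still under fact-check review are held out of the For You feed."""
    return not video.get("under_fact_check", False)
```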
The company says it now works with a dozen fact-checking partners around the world, covering more than 30 languages. US-based partners include PolitiFact, Science Feedback and Lead Stories. When these organizations identify a video as false, TikTok says the video will be taken down. If it comes back as “unverified” — meaning the fact-checker can’t reach a determination — TikTok will reduce its visibility. Unverified content is ineligible for promotion in the For You feed and will receive a label indicating that the content could not be verified. If a user tries to share the video, they will see a pop-up asking if they are sure they want to post it. These kinds of tools have been shown to influence user behavior: during US tests of its unverified labels, TikTok said, labeled videos saw a 24% drop in share rates.
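The verdict-to-action pipeline just described — a “false” verdict means removal, while “unverified” means demotion, a warning label, and a share-confirmation prompt — can be sketched as a simple mapping. This is an illustrative reconstruction of the flow as reported; every name and dictionary key is a hypothetical placeholder.

```python
# Hypothetical sketch of the fact-check outcomes described in the article.

def apply_fact_check(verdict: str) -> dict:
    """Map a fact-checker's verdict to the actions the article describes."""
    if verdict == "false":
        return {"action": "remove"}
    if verdict == "unverified":
        return {
            "action": "keep",
            "for_you_eligible": False,      # demoted: no For You promotion
            "label": "unverified_content",  # on-video warning label
            "share_prompt": True,           # "are you sure?" pop-up on share
        }
    # Anything the fact-checker cleared circulates normally.
    return {"action": "keep", "for_you_eligible": True}

def share_flow(video_state: dict, user_confirms: bool) -> bool:
    """Sharing an unverified video requires explicit confirmation."""
    if video_state.get("share_prompt") and not user_confirms:
        return False  # the user backed out at the pop-up
    return video_state["action"] == "keep"
```

The share-confirmation friction is the piece TikTok quantified: in its US tests, that extra step correlated with a 24% drop in share rates for labeled videos.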
Additionally, all election-related videos — including those from politicians, candidates, political parties or government accounts — will be labeled with a link that redirects to the election center in the app. TikTok will also host PSAs for election-related hashtags such as #midterms and #elections2022.
TikTok symbolizes a new era of social media compared to longtime leaders like Facebook and YouTube, but it’s already repeating some of the same mistakes. The short-form social platform wasn’t around during Facebook’s 2016 Russian election scandal, but it’s not immune from the same misinformation and disinformation concerns that plagued more traditional social platforms.
Like other social networks, TikTok relies on a combination of human and automated moderation to detect harmful content at scale — and like its peers, it relies too heavily on the latter. TikTok also outlines its content moderation policies in lengthy blog posts, but sometimes falls short of its own lofty promises.
In 2020, a report by the watchdog group Media Matters for America found that 11 popular videos promoting false election conspiracies in support of Trump attracted more than 200,000 combined views within a day of the US presidential election. The group noted that this selection of misleading posts was just a “small sample” of the election disinformation widespread on the app at the time.
As TikTok grows in popularity and mainstream adoption beyond the viral dance videos and early Gen Z users it’s known for, the misinformation problem will only get worse. The app has grown rapidly over the past few years, reaching three billion downloads by mid-2021, with predictions that it will cross the 750 million user mark in 2022.
This year, TikTok has emerged as an unlikely but vital source of real-time updates and open-source intelligence on the war in Ukraine, occupying such a prominent position in the information ecosystem that the White House decided to brief a handful of star creators about the conflict.
But because TikTok is built entirely around video and lacks the searchable text of a Facebook or Twitter post, tracking how misinformation travels within the app is a challenge. And like the secret algorithms that drive hit content on other social networks, TikTok’s ranking system is sealed in a black box, obscuring the forces that propel some videos to viral heights while others falter.
Researchers studying the Kenyan information ecosystem with the Mozilla Foundation found that TikTok is emerging as an alarming vector of political disinformation in the country. “While more mature platforms like Facebook and Twitter get the most scrutiny in this regard, TikTok has largely flown under the radar — despite hosting some of the most dramatic disinformation campaigns,” Mozilla fellow Odanga Madung wrote. He described a platform “filled” with misleading claims about Kenya’s general elections that could inspire political violence there.
Mozilla researchers had similar concerns ahead of the 2021 German federal election, finding that the company was slow to implement fact-checking and failed to detect a number of popular accounts impersonating German politicians.
TikTok may also have played an instrumental role in elevating a dictator’s son to the presidency of the Philippines. Earlier this year, the campaign of Ferdinand “Bongbong” Marcos Jr. flooded the social network with flattering posts and paid influencers to rewrite a brutal family legacy.
Although TikTok and Oracle are already engaged in some kind of audit agreement, the details of how this will happen have not been disclosed, nor to what extent Oracle’s findings will be made public. This means we may not know for a while how well TikTok will be able to keep election misinformation under control.