YouTube will notify users over removed abusive comments and time out repeat offenders

Toxic and hateful comments on YouTube are a constant headache for the company, creators and users. The company has previously tried to limit this with features such as an alert shown to individuals at the time of posting so they can be more careful about what they write. Now, the streaming service is rolling out a new feature that more aggressively penalizes such individuals for their offensive comments and takes broader action.

YouTube says it will send a notification to people whose offensive comments have been removed for violating platform rules. If, despite receiving the notification, a user continues to post abusive comments, the service will ban them from posting further comments for 24 hours. The company said it tested the feature ahead of today’s launch and found that the notifications and timeouts were largely effective.
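
The flow described above amounts to a simple escalation policy: a first removed comment triggers a notification, and further violations while that warning stands trigger a 24-hour posting block. The sketch below is purely illustrative and is not YouTube’s actual implementation; the class, method names and in-memory state are assumptions made for the example.

```python
from datetime import datetime, timedelta

# Illustrative sketch only -- not YouTube's real system. It models the flow the
# article describes: a removed comment triggers a notification, and repeated
# violations after that warning trigger a 24-hour posting timeout.
TIMEOUT = timedelta(hours=24)

class CommentModerator:
    def __init__(self):
        self.notified = set()       # users who have already been warned
        self.timeout_until = {}     # user -> datetime when their timeout ends

    def can_post(self, user, now=None):
        now = now or datetime.utcnow()
        return self.timeout_until.get(user, now) <= now

    def handle_violation(self, user, now=None):
        """Called when a user's comment is removed for violating the rules."""
        now = now or datetime.utcnow()
        if user in self.notified:
            # Repeat offense after a warning: block posting for 24 hours.
            self.timeout_until[user] = now + TIMEOUT
            return "timeout"
        self.notified.add(user)
        return "notification"
```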

Hateful comment detection is currently only available for comments in English, but the streaming service aims to include more languages in the future. It should be noted that the pre-publication warning is available in English and Spanish.

“Our goal is both to protect creators from users trying to negatively impact the community through comments, and to offer more transparency to users who may have had comments removed for rule violations, and we hope to help them understand our Community Guidelines,” the company said.

If a user feels that their comment has been wrongfully removed, they can share their feedback. However, the company did not say whether it would restore comments after reviewing that feedback.

In addition, in a forum post, YouTube said it is working on improving its AI-based detection systems. The company claims it removed 1.1 billion “spam” comments in the first half of 2022. YouTube also said it has improved its systems to better detect and remove bots in live chats.

YouTube and other social networks have been able to reduce spam and abusive content in part by relying on automated detection. However, abusers often use slang or deliberately misspelled words to trick these systems. What’s more, it is harder to catch people posting hateful comments in languages other than English.
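
To make that evasion problem concrete, here is a small, purely hypothetical sketch (the blocklist term and normalization rules are invented for illustration, not drawn from any real moderation system). It shows how exact keyword matching misses an obfuscated word, and how even a basic normalization pass catches some variants while remaining easy to defeat.

```python
import re

# Hypothetical example, not a real filter: exact matching misses "h4te",
# while a crude normalization step (map look-alike characters, collapse
# repeated letters) catches it -- yet determined users can still evade both.
BLOCKLIST = {"hate"}  # placeholder standing in for an abusive term

LOOKALIKES = str.maketrans({"@": "a", "4": "a", "3": "e", "1": "i", "0": "o", "$": "s"})

def normalize(text: str) -> str:
    text = text.lower().translate(LOOKALIKES)
    return re.sub(r"(.)\1+", r"\1", text)  # collapse repeats ("haaate" -> "hate")

def naive_match(comment: str) -> bool:
    return any(word in BLOCKLIST for word in comment.lower().split())

def normalized_match(comment: str) -> bool:
    return any(word in BLOCKLIST for word in normalize(comment).split())

print(naive_match("I h4te this"))       # False -- the misspelling slips through
print(normalized_match("I h4te this"))  # True  -- caught after normalization
```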

The streaming company has been testing a wide range of tools in recent quarters to reduce offensive comments on the platform. These tests include hiding comments by default and displaying a user’s comment history on their profile card.

Last month, YouTube released a feature that allows creators to hide a specific user from comments. This control applies to the entire channel, so even if that user posts hateful comments on another video, their comments won’t show up.
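
As a rough illustration of how such a channel-wide control differs from a per-video one, the hypothetical sketch below keeps a single hide list at the channel level, so one action suppresses a user’s comments on every video. The class and method names are invented for the example and do not reflect YouTube’s API.

```python
from collections import defaultdict

class Channel:
    """Toy model of a channel-wide comment hide list (illustrative only)."""

    def __init__(self):
        self.hidden_users = set()           # one list covering the whole channel
        self.comments = defaultdict(list)   # video_id -> [(user, text)]

    def hide_user(self, user):
        self.hidden_users.add(user)

    def add_comment(self, video_id, user, text):
        self.comments[video_id].append((user, text))

    def visible_comments(self, video_id):
        # Hidden users are filtered out on every video, not just the one
        # where the creator first hid them.
        return [(u, t) for u, t in self.comments[video_id]
                if u not in self.hidden_users]

channel = Channel()
channel.add_comment("video_a", "user123", "abusive comment")
channel.hide_user("user123")
channel.add_comment("video_b", "user123", "abuse on a different video")
print(channel.visible_comments("video_b"))  # [] -- hidden across the channel
```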

Platforms worldwide are grappling with the problem of limiting the spread of hateful comments.

Instagram was fertile ground for such abuse when English footballers Bukayo Saka, Marcus Rashford and Jadon Sancho were harassed over missed penalties in last year’s Euro final. A new report from GLAAD and Media Matters noted that anti-LGBTQ slurs have skyrocketed since Elon Musk took over Twitter. While all of these platforms have released tools to mute or hide comments and restrict who can comment, the amount of hateful and abusive content remains a huge problem.
