Hateful and offensive comments on social media platforms affect not only users but also the companies that run these platforms. Hatred's devastation is, unfortunately, nothing new, but new communication technologies have increased its scope and impact.
To combat abusive remarks in the comments section, YouTube, the world’s largest online video sharing and social media platform, is introducing a new feature that will encourage commenters to reconsider their hateful and offensive remarks before posting them.
According to a report by TechCrunch, YouTube will also begin testing a filter that will allow creators to avoid having to read some of the hurtful comments on their channel that had been automatically held for review. The new features are intended to address long-standing issues with the quality of comments on YouTube’s platform, which creators have been complaining about for years.
According to a post signed by "Rob" at TeamYouTube, YouTube will begin warning users when their comments are detected and removed for violating the company's guidelines.
“The YouTube team has been working on improving our automated detection systems and machine learning models to identify and remove spam. In fact, they have removed over 1.1 billion spam comments in the first six months of 2022,” the post mentioned.
"As spammers change their tactics, our machine learning models are continuously improving to better detect the new types of spam," the post continued.
“We’ve improved our spambot detection to keep bots out of live chats. We know bots negatively impact the live streaming experience, as the live chat is a great way to engage and connect with other users and creators. This update should make live streaming a better experience for everyone,” Rob writes in the post.