YouTube has been an excellent platform for creators and marketers, but it has also become a hub for misinformation. Chasing popularity and ad revenue, some users ignore YouTube's terms of service and publish content that can mislead viewers.
Once misinformation is uploaded to YouTube, the site's recommendation algorithm automatically steers viewers from one video to the next, keeping them engaged with the same genre. These online nudges can have offline consequences for unsuspecting users. YouTube's terms of service restrict most threatening content, but the platform's reach also brings accountability for preventing real-world harm.
Earlier this year, YouTube removed illicit content that violated its terms of service. The problem is that, with misinformation, the potential harm is difficult to gauge: many conspiracy theories seem harmless until someone actually gets hurt. YouTube's primary job here is to keep such content from spreading on its platform. The site already surfaces credible news sources in search results, so sensitive queries return accurate information.
Moreover, YouTube can continue tuning its recommendation algorithm to deprioritize harmful content in general. In January 2019, YouTube said it would take strong action against misinformation and against those who spread it. It had already taken similar steps, stripping ads from offending videos and disabling comments on them.
Some reports and investigations show that viewers are still being funneled toward the same old lies, still receiving videos of the very kind that were supposed to be banned. Other reviews, however, indicate that YouTube has moved quickly against such content and deleted videos that could harm users.