
17.5 million content items ‘actioned’ on Facebook in India in May: Meta



Facebook, owned by Meta, “actioned” around 17.5 million pieces of content across 13 violation categories in India in May, according to the social media giant’s latest monthly report.

The “actioned” content fell into categories such as bullying and harassment, violent and graphic content, adult nudity and sexual activity, child endangerment, dangerous organizations and individuals, and spam, among others.

Facebook took action against approximately 17.5 million pieces of content between May 1 and May 31, 2022 across multiple categories, while Meta’s photo-sharing platform Instagram “actioned” nearly 4.1 million content items across 12 categories over the same period, according to its recently released India Monthly Report.

“Taking action could include removing a piece of content from Facebook or Instagram or covering photos or videos that may upset some audiences with a warning,” Meta’s report said.

Under IT rules that came into force in May last year, major digital platforms (with more than five million users) must publish monthly compliance reports, listing details of complaints received and actions taken. The reports also include details about content removed or disabled through proactive monitoring using automated tools.

Microblogging platform Twitter’s India Transparency report for June 2022 reveals that it received more than 1,500 complaints in the country through its local complaints channel between April 26, 2022 and May 25, 2022.

“In addition to the above, we have processed 115 grievances that appealed Twitter account suspensions. All have been resolved and appropriate responses have been sent,” Twitter’s report reads.

“We have not reversed any of the account suspensions based on the specifics of the situation, therefore all reported accounts remain suspended,” it added.

More than 46,500 accounts have been suspended for violating the guidelines, thanks to proactive monitoring, according to the Twitter report, noting that this data represents global actions taken, not just those related to content from India.

The government has issued a formal notice to Twitter to comply with all its past orders by July 4 or it could lose its intermediary status, which means it would be liable for all content posted on its platform.

WhatsApp, owned by Meta, banned more than 19 lakh Indian accounts in May, based on complaints received from users through its grievance channel and through its own breach prevention and detection mechanism, according to the monthly report recently released by the messaging platform.

Meanwhile, in the case of Facebook, Meta’s latest report published on June 30 showed that of the 17.5 million pieces of content actioned, 3.7 million were in the violent and graphic content category, 2.6 million in the adult nudity and sexual activity category, and 9.3 million were for spam.

Some of the other categories in which content was actioned included bullying and harassment (294,500), suicide and self-harm (482,000), dangerous organizations and individuals – terrorism (106,500), and dangerous organizations and individuals – organized hate (4,300).

Meta’s report covers a 31-day period and details actions taken against infringing user-generated content on Facebook and Instagram in India, proactive detection rates, and complaints received from users in the country through its grievance mechanisms.

For Facebook, it said, “Between May 1 and May 31, we received 835 reports through India’s grievance mechanism, and we responded to 100% of those 835 reports.”

“Of these incoming reports, we provided tools for users to resolve their issues in 564 cases. These include pre-established channels to report content for specific violations, self-remediation flows where they can download their data, ways to address account-takeover issues, etc.”

For Instagram, in May, 13,869 reports were received through India’s grievance mechanism, and the platform responded to 100% of reports.

Among these incoming reports, it provided tools for users to solve their problems in 4,693 cases.

“Of the remaining 9,173 reports requiring specialist review, we reviewed the content in accordance with our policies and took action on a total of 5,770 reports,” it added.

The government is finalizing new social media rules that propose to arm users with a grievance appeal mechanism against arbitrary content moderation, inaction or takedown decisions by big tech companies.

Last month, the IT Ministry released draft rules that propose a government panel to hear user appeals against inaction on complaints or content-related decisions made by the grievance officers of social media platforms.

At present, “there is no appeal mechanism provided by intermediaries nor a credible self-regulatory mechanism in place,” the IT Department had said.

Major social media platforms have drawn scrutiny in the past over hate speech, misinformation and fake news circulating on their platforms. Concerns have also been raised about digital platforms acting arbitrarily in taking down content and “de-platforming” users.

The government notified IT rules last year to make digital intermediaries more accountable for content hosted on their platforms.

(Only the title and image of this report may have been edited by Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)