
Instagram actively helping spread of self-harm among teenagers, study finds



Sharing the findings of a new study, this piece criticises Meta for helping self-harm content spread on Instagram and for failing to remove it. In an experiment run on a private self-harm network set up by Danish researchers, the study found Instagram's moderation to be inadequate and in breach of European Union law. Meta's failure to remove such content raises concerns that it could harm young people and drive up suicide rates. The piece also stresses that Instagram's algorithms actively help self-harm networks spread, with potentially serious consequences, and underlines the gravity of a situation that endangers the lives of children and teenagers.


Source: www.theguardian.com

Meta is actively helping self-harm content to flourish on Instagram by failing to remove explicit images and encouraging those engaging with such content to befriend one another, according to a damning new study that found its moderation “extremely inadequate”.

Danish researchers created a private self-harm network on the social media platform, including fake profiles of people as young as 13 years old, in which they shared 85 pieces of self-harm-related content gradually increasing in severity, including blood, razor blades and encouragement of self-harm.

The aim of the study was to test Meta’s claim that it had significantly improved its processes for removing harmful content, which it says now uses artificial intelligence (AI). The tech company claims to remove about 99% of harmful content before it is reported.

But Digitalt Ansvar (Digital Accountability), an organisation that promotes responsible digital development, found that in the month-long experiment not a single image was removed.

A Meta spokesperson said: ‘Content that encourages self-injury is against our policies.’ Photograph: Algi Febri Sugita/ZUMA Press Wire/REX/Shutterstock

When Digitalt Ansvar created its own simple AI tool to analyse the content, it was able to automatically identify 38% of the self-harm images and 88% of the most severe. This, the organisation said, shows that Instagram has access to technology able to address the issue but “has chosen not to implement it effectively”.

The platform’s inadequate moderation, said Digitalt Ansvar, suggests that it is not complying with EU law.

The Digital Services Act requires large digital services to identify systemic risks, including foreseeable negative consequences on physical and mental wellbeing.

A Meta spokesperson said: “Content that encourages self-injury is against our policies and we remove this content when we detect it. In the first half of 2024, we removed more than 12m pieces related to suicide and self-injury on Instagram, 99% of which we proactively took down.

“Earlier this year, we launched Instagram Teen Accounts, which will place teenagers into the strictest setting of our sensitive content control, so they’re even less likely to be recommended sensitive content and in many cases we hide this content altogether.”

The Danish study, however, found that rather than attempt to shut down the self-harm network, Instagram’s algorithm was actively helping it to expand. The research suggested that 13-year-olds became friends with all members of the self-harm group after being connected with just one of its members.

This, the study said, “suggests that Instagram’s algorithm actively contributes to the formation and spread of self-harm networks”.

Speaking to the Observer, Ask Hesby Holm, chief executive of Digitalt Ansvar, said the company was shocked by the results, having thought that, as the images it shared increased in severity, they would set off alarm bells on the platform.

“We thought that when we did this gradually, we would hit the threshold where AI or other tools would recognise or identify these images,” he said. “But big surprise – they didn’t.”

He added: “That was worrying because we thought that they had some kind of machinery trying to figure out and identify this content.”


Failing to moderate self-harm images can result in “severe consequences”, he said. “This is highly associated with suicide. So if there’s nobody flagging or doing anything about these groups, they go unknown to parents, authorities, those who can help support.” Meta, he believes, does not moderate small private groups, such as the one his company created, in order to maintain high traffic and engagement. “We don’t know if they moderate bigger groups, but the problem is self-harming groups are small,” he said.

Psychologist Lotte Rubæk left a global expert group after accusing Meta of ‘turning a blind eye’ to harmful Instagram content. Photograph: Linda Kastrup/Ritzau Scanpix/Alamy

Lotte Rubæk, a leading psychologist who left Meta’s global expert group on suicide prevention in March after accusing it of “turning a blind eye” to harmful Instagram content, said while she was not surprised by the overall findings, she was shocked to see that they did not remove the most explicit content.

“I wouldn’t have thought that it would be zero out of 85 posts that they removed,” she said. “I hoped that it would be better.

“They have repeatedly said in the media that all the time they are improving their technology and that they have the best engineers in the world. This proves, even though on a small scale, that this is not true.”

Rubæk said Meta’s failure to remove images of self-harm from its platforms was “triggering” vulnerable young women and girls to further harm themselves and contributing to rising suicide figures.

Since she left the global expert group, she said, the situation on the platform has only worsened, and the impact is plain to see in her patients.

The issue of self-harm on Instagram, she said, is a matter of life and death for young children and teenagers. “And somehow that’s just collateral damage to them on the way to making money and profit on their platforms.”
