How do dating apps fight against (cyber) sexual violence?


With the health crisis and its constraints (lockdowns, curfews), dating apps and platforms have seen their usage surge. But at a time when many of them now offer video conversations, how do they protect their users? What filters and safeguards can be put in place to better combat harassment?

With much of 2020 spent between four walls, dating apps and platforms have been anything but empty since the start of the Covid-19 pandemic. Tinder announced that on March 29, 2020 it recorded more than 3 billion swipes in a single day, a world record for the app.

Most of these platforms have “taken advantage” of the pandemic to roll out new chat features, attempting to compensate for the lack of social contact felt by everyone. The problem: for several years now, these services have been singled out for their lack of sanctions against discriminatory or inappropriate remarks, or the sending of explicit photos (the infamous “dick pics”).

In 2019, an investigation by BuzzFeed, ProPublica and Columbia Journalism Investigations (CJI) revealed the presence of identified sexual offenders on the dating apps Tinder and OkCupid, both owned by the American group Match. In 2016, another investigation, conducted by the UK National Crime Agency, found a 450% increase over five years in physical sexual assaults involving a dating app. “It is increasing as the use of dating apps increases. Predators go where they can hunt,” said Emmanuelle Piet, president of the Feminist Collective Against Rape (CFCV). All the more so since this violence occurs both on the platforms themselves (sexual harassment, inappropriate photos and comments) and during the “real-life” meetings that follow.

Screening offensive profiles and messages

In recent months, however, the dating giants have rolled out new features aimed at improving their users' safety: the American group Match has started working with associations fighting sexual violence, while in the United States Tinder has acquired an “alert” button designed in partnership with Noonlight, an emergency response and personal safety service.

Photo: Mika Baumeister on Unsplash

For Clémentine Lalande, CEO and founder of the French slow-dating app Once, dating apps are “unfortunately tempting places for some criminals”. Still, the apps have some room to act. Lalande points in particular to five “layers” of security. First, profile moderation: “We train algorithms to detect undesirable content, such as nudity or weapons.” Tinder, for its part, tells us it has set up a “Bio Warning” system: a warning mechanism aimed at users whose profile descriptions do not comply with the app's rules.

At Once, beyond identity verification (confirming your profile by SMS or email, for example), it is also possible to check certain connection and activity data. “If we have a person who likes everyone and is connected for a large number of hours during the day, that can be suspicious,” explains Clémentine Lalande.
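The kind of connection-and-activity check Lalande describes can be pictured as a simple heuristic over account behavior. The sketch below is purely illustrative: the function name and thresholds are invented for this article, not Once's actual logic.

```python
def is_suspicious(likes_sent: int, profiles_seen: int, hours_online: float) -> bool:
    """Flag activity patterns that look automated or predatory (toy thresholds)."""
    like_ratio = likes_sent / profiles_seen if profiles_seen else 0.0
    # Liking nearly everyone, or being online most of the day, is a red flag.
    return like_ratio > 0.95 or hours_online > 16

# An account that liked 990 of 1,000 profiles and stayed online 18 hours
# would be flagged; a casual user would not.
```

In practice such signals would only feed a review queue, not trigger an automatic ban on their own.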

"When there are reports, the profiles should be deleted automatically"

Emmanuelle Piet, president of the Feminist Collective Against Rape (CFCV)

On Tinder's side, the app tells us it has experimented in around fifteen countries with a new feature called “Does This Bother You?”: the question is put to users when they receive a potentially offensive message. If they answer “Yes”, they can report the sender for their behavior. According to the app, this feature has increased harassment-related reports by 37% in the countries where it has been rolled out.

But it is precisely the lack of moderation following reports that is often criticized. “When there are reports, the profiles should be deleted automatically,” says Emmanuelle Piet. Tinder says it has strengthened its safety center and reporting process, while at Once, “there are five places in the app where you can report. Responses range from automatic bans to manual moderation,” explains Clémentine Lalande.

What responsibilities do apps have?

According to a January 2020 YouGov survey for Once, 50% of women under 35 surveyed had already received unsolicited explicit content on dating apps. “It's a staggering statistic. As women, we are powerless: what can we do apart from report it?” asks Clémentine Lalande.

“We run our filter, and if we detect a nude photo in the messages, we replace it with a photo of a kitten”

Clémentine Lalande, CEO of the Once application

While many dating apps have decided, quite simply, to remove the option to send images in order to get around the problem, the founder of Once chose to respond with humor, after having herself received a photo of a penis. “We run our filter, and if we detect a nude photo in the messages, we replace it with a photo of a kitten. Only 0.005% of posts per month are moderated, but for those alone, it's worth it,” she concludes.
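The filter Lalande describes can be sketched as a small pipeline: score each incoming image with a nudity classifier and swap flagged ones for a kitten picture before delivery. Everything below is illustrative: `classify_nudity` stands in for a real model, and the threshold and URL are invented.

```python
KITTEN_URL = "https://example.com/kitten.jpg"  # hypothetical replacement image

def classify_nudity(image_bytes: bytes) -> float:
    """Stand-in for a real image classifier returning a nudity score in [0, 1].
    Here a toy heuristic flags payloads containing b'nsfw', purely for illustration."""
    return 1.0 if b"nsfw" in image_bytes else 0.0

def filter_message_image(image_bytes: bytes, threshold: float = 0.9):
    """Deliver the original image, or the kitten URL if the classifier flags it."""
    if classify_nudity(image_bytes) >= threshold:
        return KITTEN_URL
    return image_bytes
```

A real deployment would replace the toy classifier with a trained model and tune the threshold against the false-positive rate, since wrongly swapping a harmless photo is far less costly than letting an unsolicited explicit image through.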

One of the major innovations rolled out on many dating apps in 2020 is the arrival of video chat within their interfaces. Except that, since these conversations take place live, it is difficult, if not impossible, for the apps to moderate them. Tinder, for example, relies on the mutual consent of the two people about to converse on video, and says it does not record the conversations. The app also claims to have stepped up its moderation pipelines, and relies on reports from its users. An approach that raises questions, given that the app has already suffered security breaches in the past.

According to the founder of Once, apps must provide means of protection against aggressive or inappropriate behavior. “When you run an app that brings together hundreds of millions of people, you cannot shirk your responsibilities,” says Clémentine Lalande. The app Bumble, for example, has announced that it has banned all malicious comments about people's bodies, in order to fight ableism, fatphobia and transphobia, by searching for keywords and terms. These protection models are not limited to algorithmic moderation: in matters of human relationships, love and sexuality, it is hard to forget the human behind the screen.
