17. november 2021

Notes from Data Feminism part II

Seminar:

On October 25th 2021 the Coordination for Gender Research invited five scholars and practitioners to step into the discussion of Data Feminism. The speakers' contributions revolved around questions of democracy, dilemmas of freedom of speech, hate speech and its consequences for the (missing) public debate, and reflections on how we gain and circulate knowledge on the subject.

The seminar was the second conducted on the theme. Invited were Cathrine Hasse, Nanna Thylstrup, and Analysis and Numbers ["Analyse og Tal"], represented by Tobias Bornakke and Mikkeline Thomsen. Samantha Dawn Breslin wrapped up the discussion with concluding reflections.

Cathrine Hasse introduced the discussion by connecting Artificial Intelligence (AI) with freedom of speech on the one hand and the avoidance of hate speech on the other. She reflected upon how this dilemma affects women, and how it relates to AI. Drawing on her work as Vice President of PEN, she told three stories that circled around the dilemma: stories of women who were chased or killed for entering the public debate, of people who used their freedom of speech for hate speech on social media, and of those we thought were people sharing hate speech but were in reality not real people at all, only algorithms producing clickable output. When hate speech does not even come from real people, but from profiting on an algorithm that maximizes clicks, what function or value does hate speech serve? Does it really bring people together to exchange, meet, and possibly change opinions, or does it invite an aggressive tone that would not otherwise be there? How does this affect us? That was her critical stance. The more extreme the (fictive) opinions become, the more senseless words become, Hasse argued.

Nanna Thylstrup delivered a tour de force through data collection, moderation, and sharing. She shared reflections on the consequences of putting realities into numbers, and why this matters. She discussed gendered and racialized imaginaries of data infrastructure, drawing on recent studies of how men and women are represented as truth-tellers: as whistleblowers or as leakers? She further discussed data sets and questions of inclusion and exclusion. Data sets can be seen as a commodity in public or private organizations, but can at the same time be used as counterpower. In the panel discussion, Thylstrup further engaged with how we could understand bias from a feminist point of view. Offensive speech that is not directed towards a group, and therefore not characterized as hate speech, is still interwoven in a structure of thinking. A statement such as "Go f*** your mom" may not be characterized as hate speech, but it still reflects ways of thinking about gender and power structures.

Tobias Bornakke & Mikkeline Thomsen represented the company Analysis and Numbers ["Analyse og Tal"] and introduced their recent report on hate speech, which used machine learning to map 63 million comments on the public Facebook pages of Danish politicians and media from February 2019 to February 2021. Bornakke and Thomsen distinguished hate speech from merely aggressive language, as the mapping should not document a restriction of certain feelings or simple impoliteness. They found that 5.2% of all comments were attacks: 3.8% were offensive and 1.4% were hate speech. Notably, the comments were directed at some groups more than others. Hate speech was especially directed towards Muslims (50%), but also towards women, politicians, and people with disabilities. Little is yet known about the commentators, but the report gave significant insight into the need for moderation, Thomsen and Bornakke argued. They further called for discussion of digital education for all ages, of ongoing legislation and regulation, and of possibly AI-supported moderation.