Listeners in the Room
a conversation between Femke Snelting and Katía Truijen (Rotterdam, July 22)

KT: During the retreat in Arnhem, Matthew (Plummer-Fernandez) created the @InstitutesOf bot that generates titles for institutes. You were interested in making a critical fork, to do a close reading of the code that Matthew wrote. Reading and commenting on code is a recurring practice in your work.

FS: In the projects of Constant, the organization I work with, we often try to make what we call meta-comments. We speak about gender in a bug report, we discuss ethnicity in a proposal for a software standard, or we try to read language habits in large data sets. We do this by making use of the writerly structures that already exist around collaborative code practice. Through its open source licensing, free software allows you to intervene and to be part of a collective and continuous process. People engage in discussions around code through mailing lists, or file bug reports to comment on technical issues. In that way it is a very discursive culture.

At Constant, we aim to involve different types of expertise in discussions around technology. We think it's necessary to include other voices than those from engineering or computer science, as it is too limited to only confront technology with technology. Through collectively reading and commenting on different layers of code, we want to learn, and to show, that our relations with technology are never one-way.

When Matthew told us about the @InstitutesOf bot, the idea came up that it could be an interesting occasion for a critical fork: a complete copy of his code, but this time with comments, references and discussions added. I was curious to see if it would be possible to recognize elements of the discussions we had about computational intelligence and research within the technological objects that script the programme, the 'collage code' that Matthew wrote. Also, the idea of a bot as a generator of institutes is interesting in itself, because of the institutional forces embedded in code practice, for example the way certain habits and power relations establish themselves over time. So it would be interesting to see how @InstitutesOf, as an example of institutional critique that creates 'institutions' through code, might already have its own institutional habits.
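
Matthew's source is not reproduced here, but a minimal sketch of what such 'collage code' might look like could read as follows; the word lists and the institute_title function are entirely hypothetical, invented for illustration:

```python
import random

# Hypothetical word lists; the vocabulary of the actual bot is Matthew's own.
QUALIFIERS = ["Advanced", "Speculative", "Applied", "Critical"]
SUBJECTS = ["Computation", "Listening", "Otherness", "Institutional Studies"]

def institute_title():
    """Collage a title for a fictional institute from the two word lists."""
    return "The Institute of %s %s" % (
        random.choice(QUALIFIERS),
        random.choice(SUBJECTS),
    )

if __name__ == "__main__":
    print(institute_title())
```

Even a toy generator like this embeds choices, which vocabularies are included, which combinations are possible, and those choices are exactly the kind of institutional habits a critical fork would annotate.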

KT: Currently, there is an ongoing stream of news about algorithmic flaws and machine learning algorithms showing discriminatory behavior. For example, Amazon's same-day delivery service excluded certain ZIP codes in predominantly black neighborhoods. Reports about these incidents often call for a more critical engagement with algorithmic culture, emphasizing the importance of how machine learning algorithms are designed.

FS: But the "algorithmic hype", the craze of using the a-word for anything related to contemporary computation, also means that the complexity of this technology is confirmed over and over again, as a way of distancing ourselves from what is actually going on. Of course there are many technologies that are beyond the understanding of many of us, but there are also surprisingly mundane, repetitive and even silly aspects to them. The complications often come from the layering of simple assumptions. I think it's important to decide not to be scared away.

KT: So the challenge is to find ways or tactics to align ourselves with technologies. I read about a recent workshop initiated by Constant, where you were categorizing phrases as paternalistic or not. This approach seemed to offer an interesting entry point for learning more about algorithmic thinking and machine learning.

FS: About a year ago, we organized this session with activists, artists and researchers to learn about and work with text data mining. A computational linguistics professor from Antwerp introduced us to Pattern, a text mining module for the Python programming language. We learned that text data mining technologies are based on optimizing a small seed of knowledge, which is then scaled up to analyze large sets of data. Small test samples, so-called "Golden Standards", function as benchmarks: once the algorithms perform well against them, the rest of the data is analyzed automatically, apparently no longer by humans. However, the initial human decision-making process remains central to how these algorithms extrapolate knowledge.
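
Pattern bundles classifiers trained on exactly this kind of seed data. A minimal sketch of its sentiment interface, assuming the pattern package is installed:

```python
# Minimal sketch of Pattern's sentiment interface (assuming the
# pattern package is installed). sentiment() returns a pair:
# polarity in [-1.0, 1.0] and subjectivity in [0.0, 1.0].
from pattern.en import sentiment

polarity, subjectivity = sentiment("The workshop was surprisingly useful.")
print(polarity, subjectivity)
```

The two numbers are extrapolations from a small, human-annotated seed: the scores the module returns on new text are only as nuanced as the annotations it was optimized against.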

We tried to find out as much as we could about these Golden Standards, and under which conditions they are developed. Not surprisingly, they are often created by underpaid students or Mechanical Turk workers, who are basically bombarded with data and paid for quick classification. For example, they do sentiment analysis by rating sentences on the level of anger being expressed. In this process, clichés emerge and are reaffirmed, because people don't have time to consider their decisions. Anything that is ambiguous or unclear is removed: first at the level of classification, which must be positive or negative, and second, if people who rate the same sentence disagree, the input does not count. Only material without ambiguity passes through.
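
The filtering step described here, dropping every sentence on which raters disagree, is simple to express in code; the sentences and labels below are invented for illustration:

```python
# Hypothetical ratings: each sentence was labeled by two raters.
ratings = {
    "You really should know better.": ["anger", "neutral"],  # disagreement
    "I never want to see you again.": ["anger", "anger"],    # agreement
}

# Keep only unambiguous input: sentences on which all raters agree.
gold_standard = {
    sentence: labels[0]
    for sentence, labels in ratings.items()
    if len(set(labels)) == 1
}
print(gold_standard)  # {'I never want to see you again.': 'anger'}
```

One line of filtering is enough to make ambiguity structurally invisible: whatever people hesitated or argued about simply never enters the benchmark.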

We were asking ourselves what these types of processes mean in terms of knowledge production. We decided to classify "paternalism" in a data set, something as ambiguous as can be. So we simulated a scientific process by developing a Golden Standard, but allowing for debate and offering people the time to make a decision, in order to counter the efficiency drive of text mining technologies.

KT: Often when new technologies or applications are developed, they get analyzed or criticized, but once we are immersed, they blend into the background and analysis or intervention seems to stop. You are persistent in not using certain software applications like Gmail, or devices like a smartphone.

FS: This is part of our tactical approach. Testing out other ways of using technologies is an important element in our research practice at Constant. It may seem like a minor difference, but a lot happens when technological habits get questioned. Instead of using technologies because they are convenient, you use them because they raise interesting questions.

KT: You also actively intervene when new technologies or standards are developed.

FS: Currently, I am following the process by which emoji are implemented through Unicode. My colleagues and I were really surprised by the way "skin tone modifiers" were implemented as a response to a call for more diversity in the set of emoji. While calling it universal, they have actually introduced a racist system. As a group, we tried to intervene by responding to the public call for comments, and we investigated the decision-making process at Unicode through a close reading of meeting reports and press releases, while writing and presenting our findings. Through these meta-comments, we try to enter into dialogue with something that presents itself as open, mutable and of the people.
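
The mechanism under discussion is compact: Unicode defines five modifier code points, U+1F3FB through U+1F3FF, named after the dermatological Fitzpatrick scale; appended to a base emoji, they change its rendered skin tone. A minimal illustration:

```python
# U+1F44B WAVING HAND SIGN followed by U+1F3FD, the
# EMOJI MODIFIER FITZPATRICK TYPE-4 skin tone modifier.
waving_hand = "\U0001F44B"
type_4 = "\U0001F3FD"

print(waving_hand)           # rendered with the default (usually yellow) tone
print(waving_hand + type_4)  # the same hand, now with a type-4 skin tone
```

That the system borrows its categories from a dermatological scale for classifying skin's reaction to UV light is precisely the kind of design decision the meta-comments respond to.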

KT: During conversations about the agency and behavior of computational entities at the retreat, it appeared to be difficult to move away from a human-centered perspective. At some point, you introduced the idea of the “algorithmic gaze”.

FS: I think this is a notion I borrowed from a colleague at Constant, who is working on a long-term project on computer vision and how image recognition could be understood as an algorithmic gaze. The idea is not only to look at the effects and the politics of algorithms, but to read them as radically other forms of seeing. Of course humans are not uninvolved, but it's too easy to think that they completely define this gaze. This is a difficult exercise though, because it means trying to imagine a world in a post-humanist sense, in which the human is not always at the center, and then to think about what kind of relations we could have with this other gaze.

During the retreat, we figured that some of the discussions we know from dealing with difference and otherness suddenly became very useful. We talked about the different levels of awkwardness that sometimes emerge during group conversations. The assumption that we are all the same makes it very difficult to handle the fact that you, or it, is not: difference calls the assumed sameness into question. This can be awkward or painful, both for those who assume sameness and for those who announce their difference.

From the idea of the algorithmic gaze as something different and beyond our understanding, we imagined how an algorithmic research entity could exist as an agency without feelings: a computational agent that could be different without feeling pain or awkwardness. We were interested in exploring what this would mean in a social situation, how such an agent could help to break through the assumed togetherness, and what types of research and knowledge would be produced. What kind of relations would then emerge? How would this computational agent reflect or deflect the work between humans? In a way, we were trying to see the algorithmic processes that were already present in the room and in our conversations.

KT: I find it interesting that, during the retreat, we continuously adapted our environment to the kind of conversations we were having, as different types of chat rooms. The kitchen and the forest allowed for one-to-one conversations, the living room and the courtyard were used for plenary discussions, while the park and the café allowed us to talk in smaller groups. We often used spatial metaphors, such as the garden or the dance floor, to describe different types of relations between agents, both human and non-human. You approached the idea of the algorithmic research entity as an actual "listener in the room".

FS: In fact, we had already invited this stranger into our midst, as it was central to our discussions. In order to test some of our intuitions and to learn more about this algorithmic gaze, we generated two automatic transcriptions of the same conversation, in which an awkward social moment took place. Interestingly, this moment was completely missed and erased in both transcriptions, but in two different ways. Because we were there and we know how the technology works, we can reverse-engineer what must have happened. But if you had not attended the meeting, you would never recognize that there was an awkward moment in that discussion.
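
The divergence between two machine listeners can be made visible with an ordinary diff; the fragments below are invented for illustration, not the actual retreat transcripts:

```python
import difflib

# Hypothetical fragments of two automatic transcriptions of the same moment.
listener_a = "so um maybe we should move on to the next question".split()
listener_b = "so maybe we should move on the next session".split()

# Show word by word where the two listeners diverge in what they "heard".
for line in difflib.unified_diff(listener_a, listener_b, lineterm=""):
    print(line)
```

What the diff cannot show is just as telling: hesitations, silences and awkwardness fall outside what either listener is built to register.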

KT: So all the time, there were different non-human listeners in the room.

FS: That is where it becomes interesting: to not take these automatic transcriptions as misrepresentations of what happened, but to approach the computational agents as actual listeners. What is beautiful about the two transcriptions is that they show two different readings of a situation, which keeps us from essentializing the technology as one single thing. Instead, they operated as different characters, each with their own kind of presence.