Welcome to the "After Explainability - AI Metaphors and Materialisations Beyond Transparency" Workshop Digital Notepad! This notepad is designed for all participants to post their questions and share insights throughout our workshop. Please feel free to add any questions you may have during the sessions, and our speakers will address them during the Q&A segments or we will discuss them during the discussion sessions. If you need any assistance or have specific inquiries, please contact Goda Klumbytė via email at goda.klumbyte@uni-kassel.de We encourage you to use this space to engage actively with the workshop content and your fellow participants. Your questions and contributions are valuable to us! ---------------------------------------------- ---------------------------------------------- Thank you for your participation! If you want to connect, please feel free to leave your name and contact details here. We would also be happy if you want to reach out to us by emailing goda.klumbyte@uni-kassel.de ---------------------------------------------- DAY 2: Welcome back! Session 2: 14:00 – 14:30 Conrad Moriarty-Cole, Bath Spa University, “The Machinic Imaginary” Comments: * Very short addition, sorry, a bit off-topic, but then not really: The concept of "Habsburg AI" was coined by my colleague Jathan Sadowski :) * Questions: * 14:30 – 15:00 Nelly Yaa Pinkrah, TU Dresden, “Opacity” Comments: * Questions: * Alex: Sorry for a question from afar – The thing I try to remain attentive to is how bodies are continually being constituted, always in the making. So to alienate and to make alien is to ‘do’ a body (as Corad examines, we want to understand the machinery involved). I understand Nelly’s wonderful thoughts on opacity as a lesson in how these bodily becomings have their historical conditions. Is that fair? What would being resistant to these histories involve? I’m reminded of Saidiya Hartman’s attempts to tell an ‘impossible story and to amplify the impossibility of its telling’. Thank you Goda! 15:00 – 15:30 Discussion Comments: * Alex: Very sorry not to be more active today. Catching most of the talks on my train ride. So impressed with all the talks and the range. They show the manay worlds and cuts, and what these make possible when placed together. Thank you! * (Eugenia) my question is still a bit messy, but I struggle with grapsing the term "alienation" in this very non-anthropcentric debate/context of machinic imaginary vision. Is the point to define "machinic" independent from the human or is it about the human's imaginary of the machinic (then social and machinic overlaps?)? (i.e. contrary to "algorithmic thought" we cannot acsess) Second, human/tech seem to still be diametric/dual when talking about alienation. One reason I struggle with it, is that I have this argument with most ethics people too who are very anthropcentric, believing that everything "AI/tech" is totally non-human and think this is "bad" (hence, alienating us from ourselves), but it's the power dynamics we hide/disclose wihtin the machinic that alienate, or? * (Conrad) I don't think we need to necessarily place a value judgment on the alienation of the machinic imaginary, rather it is a descriptive exercise to highlight the general alienated condition of our finite phenomenological/aesthetic mode of being in the world. The machinic imaginary is but one instance of our alienation from existance. The critique is aimed at volunatristic undestandings of politics as opposed to a critique of the alienation itself. 
* (Eugenia) Thanks, makes sense, and very interesting to think about that, so great work.... Maybe I got caught up in thinking it too materialistically, and then I don't know where alienation starts (does it need an origin? but material thinking is more loopy ;)
* You are certainly onto something with that final point about the power dynamics that are hidden by the alienation, and as you say, we need to think recursively in terms of how these different levels of alienation (existential, economic, etc.) interact and amplify one another. Thank you for your comment!
* (Rachael: trying to articulate the question....) One of my colleagues here in Sweden examines (excuse my very simplistic description) political discourses and how they constitute certain peoples and populations (bodies) as "radical others" (aliens) who threaten the hegemonic status quo. I am wondering to what extent this is a machinic imaginary/alienation that arises because of our radical alterity to technology, and to what extent ideas of radical alterity are socially and politically constructed, with "the machine" entangled within those power/structural relations? Also, when will this monograph be published? It sounds fascinating!
* Reminded me of this article: https://digitaltalkingdrum.com/2017/08/15/against-black-inclusion-in-facial-recognition/ (Lisa)
*

Questions:
*

----------------------------------------------

12:30 – 14:00 LUNCH BREAK

----------------------------------------------

Session 1:

11:00 – 11:30 Rachael Garrett, KTH Royal Institute of Technology, “Felt Ethics”

Comments:
* Interesting reference, potentially, on feeling machines: Man, K., Damasio, A. Homeostasis and soft robotics in the design of feeling machines. Nat Mach Intell 1, 446–452 (2019). https://doi.org/10.1038/s42256-019-0103-7
* No actual question, but I loved the presentation design!! Thanks for that :) (Eugenia)

Questions:
*

11:30 – 12:00 Goda Klumbytė, University of Kassel, and Dominik Schindler, Imperial College London, “Experiential Heuristics”

Comments:
* Luise: Something I once learned in environmental psychology: when a third variable is added to an interactional system, complexity becomes too high to be estimated or handled by humans (Forrester; see also: Dörner, 1993; Gardner & Stern, 2002)
*

Questions:
* Claude: Would Participatory Design practices work for co-creating heuristics?
*

12:00 – 12:30 Discussion

Comments:
* The Who in XAI: How AI Background Shapes Perceptions of AI Explanations: https://dl.acm.org/doi/fullHtml/10.1145/3613904.3642474
*

Questions:
*

----------------------------------------------
----------------------------------------------

DAY 1:

Thank you for your participation in the first day of the workshop! We look forward to seeing you tomorrow for the second part!

Session 3:

15:05 - 16:00 General Discussion

Comments:
* Self-Consuming Generative Models Go MAD: https://arxiv.org/abs/2307.01850
* On the topic of fictions, I was reminded of Saidiya Hartman's Critical Fabulations with Arif's mention of the impossible: “I intended both to tell an impossible story and to amplify the impossibility of its telling.“ Is it fair to say that when confronted with the impossible, all we have left is stories? Can we imagine stories of other ways to live with AI? Ones that aren't so all-encompassing and don't seek omnipresence?
Questions:
*

----------------------------------------------

15:00 - 15:05 SMALL BREAK

----------------------------------------------

Session 2:

14:00 – 14:30 Arif Kornweitz, HfG Karlsruhe, “Accountability“

Video Link: https://www.palantir.com/impact/world-food-programme/

Comments:
* This work relates, I think, to Eugenia's point that accountability (and similar concepts) gets raised as a goal, but without adequate thinking about the purpose and outcomes.
*

Questions:
*

14:30 – 15:00 Discussion

Comments:
*

Questions:
*

----------------------------------------------

12:45 – 14:00 LUNCH BREAK

----------------------------------------------

Session 1:

11:00 – 11:30 Eugenia Stamboliev, University of Vienna, “Post-Critical AI Literacy”

Comments:
* Love the metaphor of the table, and who should be seated at it... and the ways this links to democracy and empowerment.
* An equivalent to the Dutch benefit fraud detection, in the UK, is the secondary school exam prediction fiasco during the pandemic (and the Post Office scandal).
* From an educational perspective this was very interesting for thinking about AI pedagogy for non-technical students (especially humanities, arts, and social science students), thank you.

Questions:
* [For later] Have you encountered ‘publics’ that are disinterested in AI, such that the goals of literacy, transparency, explainability, etc. mean very little? Have you thought through how you get people to the table?
* You said: "get Data Scientists on my side". I am a bit worried about whether this is really a post-critical, inclusive perspective.
*

11:30 – 12:00 Alex Taylor, Edinburgh University, “Flows and Scale”

Comments:
* The Politics of Operations: Excavating Contemporary Capitalism https://www.dukeupress.edu/the-politics-of-operations
* As Mezzadra and Neilson argue, the ways in which capital draws on forms and practices of human cooperation and sociality can be seen as extraction. The authors state that “data mining and other extractive activities that prey on human sociality are ever more at the edge of capital’s expanding frontiers.”
* Mezzadra and Neilson suggest focusing on capital’s operations, the processes that accomplish an end by affecting possibilities and by establishing connections. Operations refer to “specific capitalist actors and material circumstances while also being embedded in a wider network of operations and relations that involve other actors, processes, and structures.”
* also happy to talk about this more (arif :)
* This idea of the hierarchy of accountability really needs to be thought in the other direction, in terms of political organising: accountability is often about holding to account those with the power to design, build, and implement AI systems (and infrastructure more generally). For example, “corporate responsibility” is something capital has been forced into due to social pressures of holding corporations to account (even if it only signals responsibility discursively rather than in practice). We cannot expect an entity to take accountability if doing so is potentially against its own interest. In the case of AI systems, the material interests that drive design and implementation are where we might see the breakdown of genuine accountability, and obfuscation.
Questions:
*

12:00 – 12:45 Discussion

Comments:
* The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence
* https://firstmonday.org/ojs/index.php/fm/article/view/13636
* I like the way Haraway talks about the posthuman, e.g.: "Terrapolis is rich in world, inoculated against posthumanism but rich in com-post, inoculated against human exceptionalism but rich in humus, ripe for multispecies storytelling". I think she's changing the emphasis by refusing the posthuman and finding other words that speak of entanglements: compost, companion critters, humus, Terrapolis, etc.
* Fernando/ I think the point is not about trusting or not trusting AI. AI is everything and AI is anything. It is a blank canvas without the humans behind it. I think the point is more about trusting the corporations/businesses/scientists/governments behind it. That's why AI ethics is important to legislate, more than legislating AI as a technology. (We can't choose to trust a computer, but we can choose to trust a hacker.)
* Yeah, trust, like the other concepts we've discussed, is so dense when we think about AI. It means too many things to too many people.
*

Questions:
*