On Future-Making - Undoing Predictive Algorithms

Organizer
Graduiertenkolleg Medienanthropologie, Bauhaus-Universität Weimar
Postal code
99423
Location
Weimar
Country
Germany
From - Until
09.12.2021 - 10.12.2021
By
Tim Othold, Kompetenzzentrum Medienanthropologie, Bauhaus-Universität Weimar

Online workshop on predictive algorithms, questions of in/accessibility, and the making and preemptive control of the future.

9th - 10th December 2021

Organized by Katia Schwerzmann, Graduiertenkolleg Medienanthropologie, Bauhaus-Universität Weimar.

On Future-Making - Undoing Predictive Algorithms

The fundamental purpose of a large number of algorithms is to facilitate human decision-making through a calculus of risk. What governs their functioning is the preemptive logic of risk management. This logic is characteristic of late capitalist societies in which every course of action is evaluated in terms of calculated risks. In the public sector, risk assessment algorithms are deployed to “optimize” the management of “limited resources”—as in the case of the U.S. criminal justice system, which is closely tied to the prison-industrial complex. In the private sector, they are used to determine insurance premiums, the granting of credit, and access to medical procedures. However, as Stefano Harney and Fred Moten have emphasized, the goal of neoliberal societies is not to eliminate risks per se; it is rather to modulate them. In fact, neoliberal society’s focus on risk management consists in using risk as a motor for carefully planned changes that subject the population—primarily its disenfranchised and racialized portion—to increasing contingency and flexibility. This deliberate instability renders individuals susceptible to ongoing adjustment and control.

Risk management algorithms have led to new forms of preemptive surveillance that consist in the management of in/accessibility. For instance, the borders of contested regions of the globe are drawn differently on Google Maps and other online mapping tools depending on the user’s geographic position. The preemptive temporality of surveillance is ensured through two strategies: first, by precluding access based on the geographically localized, hyper-specific ethnoracial and economic position under which an individual is categorized; and second, by making this individually specified in/accessibility invisible to the surveilled individual. The extreme specificity of in/accessibility leads to the increasing fragmentation of the world. Under these circumstances, how is it possible to create the common ground necessary for the democratic act of deciding together about a shared world?

According to the dominant consequentialist and utilitarian framework justifying the use of predictive algorithms, decision-making appears reducible to the computation of the means towards an end. Decision-making, however, always entails the conjunction of a certain level of calculus with the contingency of the institutive-performative moment of decision. Yet, through an aura of “enchanted determinism” (Campolo & Crawford 2020), algorithms systematically mask the contingency of the decisions that go into their creation, producing an affect of inevitability. Nonetheless, given the entanglement between humans and machines and the recursive logic of risk assessment algorithms, decisions based on predictions induce a de facto determinism: the future escapes its programming less and less; its openness diminishes with every optimization. Ironically, the biggest “risk” for humankind, one that already constitutes the reality of millions of people—climate change—seems to escape preemptive action while at the same time appearing incontestable qua prediction.

If predictive algorithms are used to constrain the future by delimiting the frame of what is preemptively allowed to happen according to an implicit and unquestioned norm of “the good,” isn’t it necessary to call for the abolition of the predictive framework and to ask instead how algorithms could be used to open up futures rather than endlessly constrain them?

Information

The workshop is public and will be held online. Registration is not required.

The participation link will be made available on the website beforehand.

Program

December 9

15:00–16:00
Katia Schwerzmann: Introduction

16:00–17:00
Mark Hansen

1 hour break

18:00–19:00
Katherine Hayles

19:00–20:00
Luciana Parisi

December 10

15:00–16:00
Louise Amoore

16:00–17:00
Matteo Pasquinelli

1 hour break

18:00–19:00
Deanna Cachoian-Schanz

19:00–20:00
Alex Campolo

Contact

Email: katia.schwerzmann@uni-weimar.de

https://www.uni-weimar.de/futuremaking